Test Report: Docker_macOS 17345

57fac428b5f480c5d5720c0006970cf71a80e13d:2023-10-03:31284

Failed tests (23/181)

TestOffline (755.65s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-214000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-214000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m34.762171048s)

-- stdout --
	* [offline-docker-214000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-214000 in cluster offline-docker-214000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-214000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1003 19:16:15.433907   18398 out.go:296] Setting OutFile to fd 1 ...
	I1003 19:16:15.434188   18398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 19:16:15.434192   18398 out.go:309] Setting ErrFile to fd 2...
	I1003 19:16:15.434196   18398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 19:16:15.434403   18398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 19:16:15.436074   18398 out.go:303] Setting JSON to false
	I1003 19:16:15.459896   18398 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9943,"bootTime":1696375832,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 19:16:15.460023   18398 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 19:16:15.481438   18398 out.go:177] * [offline-docker-214000] minikube v1.31.2 on Darwin 14.0
	I1003 19:16:15.523554   18398 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 19:16:15.523590   18398 notify.go:220] Checking for updates...
	I1003 19:16:15.565515   18398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 19:16:15.586643   18398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 19:16:15.607478   18398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:16:15.628635   18398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 19:16:15.649671   18398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:16:15.670779   18398 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 19:16:15.729006   18398 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 19:16:15.729155   18398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:16:15.900801   18398 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:150 SystemTime:2023-10-04 02:16:15.852461007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 19:16:15.921518   18398 out.go:177] * Using the docker driver based on user configuration
	I1003 19:16:15.942283   18398 start.go:298] selected driver: docker
	I1003 19:16:15.942300   18398 start.go:902] validating driver "docker" against <nil>
	I1003 19:16:15.942308   18398 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:16:15.944959   18398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:16:16.045794   18398 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:150 SystemTime:2023-10-04 02:16:16.033685114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 19:16:16.045963   18398 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 19:16:16.046181   18398 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:16:16.067476   18398 out.go:177] * Using Docker Desktop driver with root privileges
	I1003 19:16:16.088537   18398 cni.go:84] Creating CNI manager for ""
	I1003 19:16:16.088574   18398 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 19:16:16.088590   18398 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 19:16:16.088622   18398 start_flags.go:321] config:
	{Name:offline-docker-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 19:16:16.131487   18398 out.go:177] * Starting control plane node offline-docker-214000 in cluster offline-docker-214000
	I1003 19:16:16.173634   18398 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 19:16:16.215612   18398 out.go:177] * Pulling base image ...
	I1003 19:16:16.257431   18398 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 19:16:16.257462   18398 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 19:16:16.257486   18398 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 19:16:16.257496   18398 cache.go:57] Caching tarball of preloaded images
	I1003 19:16:16.257602   18398 preload.go:174] Found /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 19:16:16.257613   18398 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 19:16:16.258549   18398 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/offline-docker-214000/config.json ...
	I1003 19:16:16.258608   18398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/offline-docker-214000/config.json: {Name:mkc3f625f49baeca047c3b8ff5635533f8bdff6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:16:16.309856   18398 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1003 19:16:16.309873   18398 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1003 19:16:16.309891   18398 cache.go:195] Successfully downloaded all kic artifacts
	I1003 19:16:16.309932   18398 start.go:365] acquiring machines lock for offline-docker-214000: {Name:mkf7d29a613d1f4484fde3b497f52f3bb1ed3676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:16:16.310068   18398 start.go:369] acquired machines lock for "offline-docker-214000" in 124.434µs
	I1003 19:16:16.310095   18398 start.go:93] Provisioning new machine with config: &{Name:offline-docker-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-214000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 19:16:16.310200   18398 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:16:16.331581   18398 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1003 19:16:16.332024   18398 start.go:159] libmachine.API.Create for "offline-docker-214000" (driver="docker")
	I1003 19:16:16.332079   18398 client.go:168] LocalClient.Create starting
	I1003 19:16:16.332271   18398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 19:16:16.332359   18398 main.go:141] libmachine: Decoding PEM data...
	I1003 19:16:16.332395   18398 main.go:141] libmachine: Parsing certificate...
	I1003 19:16:16.332558   18398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 19:16:16.332621   18398 main.go:141] libmachine: Decoding PEM data...
	I1003 19:16:16.332637   18398 main.go:141] libmachine: Parsing certificate...
	I1003 19:16:16.333650   18398 cli_runner.go:164] Run: docker network inspect offline-docker-214000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:16:16.406168   18398 cli_runner.go:211] docker network inspect offline-docker-214000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:16:16.406309   18398 network_create.go:281] running [docker network inspect offline-docker-214000] to gather additional debugging logs...
	I1003 19:16:16.406324   18398 cli_runner.go:164] Run: docker network inspect offline-docker-214000
	W1003 19:16:16.528353   18398 cli_runner.go:211] docker network inspect offline-docker-214000 returned with exit code 1
	I1003 19:16:16.528390   18398 network_create.go:284] error running [docker network inspect offline-docker-214000]: docker network inspect offline-docker-214000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-214000 not found
	I1003 19:16:16.528406   18398 network_create.go:286] output of [docker network inspect offline-docker-214000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-214000 not found
	
	** /stderr **
	I1003 19:16:16.528593   18398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:16:16.582919   18398 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:16:16.583359   18398 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000eec4e0}
	I1003 19:16:16.583390   18398 network_create.go:124] attempt to create docker network offline-docker-214000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1003 19:16:16.583464   18398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-214000 offline-docker-214000
	I1003 19:16:16.673867   18398 network_create.go:108] docker network offline-docker-214000 192.168.58.0/24 created
	I1003 19:16:16.673911   18398 kic.go:117] calculated static IP "192.168.58.2" for the "offline-docker-214000" container
	I1003 19:16:16.674037   18398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:16:16.728474   18398 cli_runner.go:164] Run: docker volume create offline-docker-214000 --label name.minikube.sigs.k8s.io=offline-docker-214000 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:16:16.781303   18398 oci.go:103] Successfully created a docker volume offline-docker-214000
	I1003 19:16:16.781419   18398 cli_runner.go:164] Run: docker run --rm --name offline-docker-214000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-214000 --entrypoint /usr/bin/test -v offline-docker-214000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 19:16:17.397478   18398 oci.go:107] Successfully prepared a docker volume offline-docker-214000
	I1003 19:16:17.397511   18398 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 19:16:17.397531   18398 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 19:16:17.397635   18398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-214000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:22:16.346018   18398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:22:16.346151   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:16.401077   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:22:16.401196   18398 retry.go:31] will retry after 175.808031ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:16.579340   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:16.630626   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:22:16.630739   18398 retry.go:31] will retry after 414.797812ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:17.047563   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:17.103841   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:22:17.103934   18398 retry.go:31] will retry after 336.915766ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:17.442058   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:17.497899   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	W1003 19:22:17.498017   18398 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	
	W1003 19:22:17.498049   18398 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:17.498100   18398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:22:17.498155   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:17.547545   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:22:17.547651   18398 retry.go:31] will retry after 252.82028ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:17.801930   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:17.856925   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:22:17.857015   18398 retry.go:31] will retry after 252.779427ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:18.112197   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:18.217223   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:22:18.217319   18398 retry.go:31] will retry after 643.635772ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:18.863378   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:22:18.916805   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	W1003 19:22:18.916923   18398 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	
	W1003 19:22:18.916952   18398 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:18.916963   18398 start.go:128] duration metric: createHost completed in 6m2.594132854s
	I1003 19:22:18.916970   18398 start.go:83] releasing machines lock for "offline-docker-214000", held for 6m2.594273586s
	W1003 19:22:18.916982   18398 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1003 19:22:18.917428   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:18.966733   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:18.966789   18398 delete.go:82] Unable to get host status for offline-docker-214000, assuming it has already been deleted: state: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	W1003 19:22:18.966897   18398 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1003 19:22:18.966908   18398 start.go:703] Will try again in 5 seconds ...
	I1003 19:22:23.969302   18398 start.go:365] acquiring machines lock for offline-docker-214000: {Name:mkf7d29a613d1f4484fde3b497f52f3bb1ed3676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:22:23.970241   18398 start.go:369] acquired machines lock for "offline-docker-214000" in 852.322µs
	I1003 19:22:23.970307   18398 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:22:23.970322   18398 fix.go:54] fixHost starting: 
	I1003 19:22:23.970864   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:24.022209   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:24.022254   18398 fix.go:102] recreateIfNeeded on offline-docker-214000: state= err=unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:24.022272   18398 fix.go:107] machineExists: false. err=machine does not exist
	I1003 19:22:24.044084   18398 out.go:177] * docker "offline-docker-214000" container is missing, will recreate.
	I1003 19:22:24.086542   18398 delete.go:124] DEMOLISHING offline-docker-214000 ...
	I1003 19:22:24.086766   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:24.138539   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	W1003 19:22:24.138595   18398 stop.go:75] unable to get state: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:24.138613   18398 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:24.138981   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:24.188834   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:24.188903   18398 delete.go:82] Unable to get host status for offline-docker-214000, assuming it has already been deleted: state: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:24.189003   18398 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-214000
	W1003 19:22:24.238681   18398 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-214000 returned with exit code 1
	I1003 19:22:24.238713   18398 kic.go:367] could not find the container offline-docker-214000 to remove it. will try anyways
	I1003 19:22:24.238791   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:24.290355   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	W1003 19:22:24.290406   18398 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:24.290492   18398 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-214000 /bin/bash -c "sudo init 0"
	W1003 19:22:24.339825   18398 cli_runner.go:211] docker exec --privileged -t offline-docker-214000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1003 19:22:24.339862   18398 oci.go:647] error shutdown offline-docker-214000: docker exec --privileged -t offline-docker-214000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:25.340661   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:25.395296   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:25.395343   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:25.395354   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:25.395373   18398 retry.go:31] will retry after 656.858772ms: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:26.054553   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:26.108250   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:26.108293   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:26.108312   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:26.108335   18398 retry.go:31] will retry after 557.709975ms: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:26.668426   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:26.721677   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:26.721729   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:26.721746   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:26.721768   18398 retry.go:31] will retry after 841.047522ms: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:27.565255   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:27.618210   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:27.618256   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:27.618268   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:27.618289   18398 retry.go:31] will retry after 1.742746825s: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:29.361466   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:29.415366   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:29.415420   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:29.415432   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:29.415452   18398 retry.go:31] will retry after 3.601611909s: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:33.017909   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:33.074922   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:33.074978   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:33.074990   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:33.075012   18398 retry.go:31] will retry after 3.710973481s: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:36.787475   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:36.842636   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:36.842681   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:36.842696   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:36.842718   18398 retry.go:31] will retry after 5.377194766s: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:42.221172   18398 cli_runner.go:164] Run: docker container inspect offline-docker-214000 --format={{.State.Status}}
	W1003 19:22:42.273039   18398 cli_runner.go:211] docker container inspect offline-docker-214000 --format={{.State.Status}} returned with exit code 1
	I1003 19:22:42.273083   18398 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:22:42.273096   18398 oci.go:661] temporary error: container offline-docker-214000 status is  but expect it to be exited
	I1003 19:22:42.273125   18398 oci.go:88] couldn't shut down offline-docker-214000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	 
	I1003 19:22:42.273212   18398 cli_runner.go:164] Run: docker rm -f -v offline-docker-214000
	I1003 19:22:42.323972   18398 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-214000
	W1003 19:22:42.374156   18398 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-214000 returned with exit code 1
	I1003 19:22:42.374262   18398 cli_runner.go:164] Run: docker network inspect offline-docker-214000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:22:42.425362   18398 cli_runner.go:164] Run: docker network rm offline-docker-214000
	I1003 19:22:42.521580   18398 fix.go:114] Sleeping 1 second for extra luck!
	I1003 19:22:43.523179   18398 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:22:43.545027   18398 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1003 19:22:43.545196   18398 start.go:159] libmachine.API.Create for "offline-docker-214000" (driver="docker")
	I1003 19:22:43.545224   18398 client.go:168] LocalClient.Create starting
	I1003 19:22:43.545432   18398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 19:22:43.545517   18398 main.go:141] libmachine: Decoding PEM data...
	I1003 19:22:43.545541   18398 main.go:141] libmachine: Parsing certificate...
	I1003 19:22:43.545618   18398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 19:22:43.545678   18398 main.go:141] libmachine: Decoding PEM data...
	I1003 19:22:43.545703   18398 main.go:141] libmachine: Parsing certificate...
	I1003 19:22:43.546408   18398 cli_runner.go:164] Run: docker network inspect offline-docker-214000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:22:43.601024   18398 cli_runner.go:211] docker network inspect offline-docker-214000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:22:43.601115   18398 network_create.go:281] running [docker network inspect offline-docker-214000] to gather additional debugging logs...
	I1003 19:22:43.601131   18398 cli_runner.go:164] Run: docker network inspect offline-docker-214000
	W1003 19:22:43.651647   18398 cli_runner.go:211] docker network inspect offline-docker-214000 returned with exit code 1
	I1003 19:22:43.651676   18398 network_create.go:284] error running [docker network inspect offline-docker-214000]: docker network inspect offline-docker-214000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-214000 not found
	I1003 19:22:43.651693   18398 network_create.go:286] output of [docker network inspect offline-docker-214000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-214000 not found
	
	** /stderr **
	I1003 19:22:43.651837   18398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:22:43.704015   18398 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:22:43.705425   18398 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:22:43.705795   18398 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e3dec0}
	I1003 19:22:43.705809   18398 network_create.go:124] attempt to create docker network offline-docker-214000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1003 19:22:43.705899   18398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-214000 offline-docker-214000
	W1003 19:22:43.756583   18398 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-214000 offline-docker-214000 returned with exit code 1
	W1003 19:22:43.756647   18398 network_create.go:149] failed to create docker network offline-docker-214000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-214000 offline-docker-214000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1003 19:22:43.756663   18398 network_create.go:116] failed to create docker network offline-docker-214000 192.168.67.0/24, will retry: subnet is taken
	I1003 19:22:43.758175   18398 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:22:43.758554   18398 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fc2420}
	I1003 19:22:43.758566   18398 network_create.go:124] attempt to create docker network offline-docker-214000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1003 19:22:43.758655   18398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-214000 offline-docker-214000
	I1003 19:22:43.845686   18398 network_create.go:108] docker network offline-docker-214000 192.168.76.0/24 created
	I1003 19:22:43.845717   18398 kic.go:117] calculated static IP "192.168.76.2" for the "offline-docker-214000" container
	I1003 19:22:43.845835   18398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:22:43.898979   18398 cli_runner.go:164] Run: docker volume create offline-docker-214000 --label name.minikube.sigs.k8s.io=offline-docker-214000 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:22:43.948988   18398 oci.go:103] Successfully created a docker volume offline-docker-214000
	I1003 19:22:43.949112   18398 cli_runner.go:164] Run: docker run --rm --name offline-docker-214000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-214000 --entrypoint /usr/bin/test -v offline-docker-214000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 19:22:44.282459   18398 oci.go:107] Successfully prepared a docker volume offline-docker-214000
	I1003 19:22:44.282490   18398 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 19:22:44.282504   18398 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 19:22:44.282607   18398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-214000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:28:43.560476   18398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:28:43.560592   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:43.615316   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:43.615427   18398 retry.go:31] will retry after 363.574557ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:43.981421   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:44.032796   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:44.032908   18398 retry.go:31] will retry after 400.432562ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:44.435080   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:44.488500   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:44.488642   18398 retry.go:31] will retry after 835.343826ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:45.326482   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:45.381641   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	W1003 19:28:45.381755   18398 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	
	W1003 19:28:45.381781   18398 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:45.381837   18398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:28:45.381896   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:45.432441   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:45.432538   18398 retry.go:31] will retry after 209.043606ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:45.642541   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:45.697586   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:45.697702   18398 retry.go:31] will retry after 364.143721ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:46.064320   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:46.118347   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:46.118446   18398 retry.go:31] will retry after 717.885553ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:46.838784   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:46.890055   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	W1003 19:28:46.890161   18398 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	
	W1003 19:28:46.890184   18398 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:46.890193   18398 start.go:128] duration metric: createHost completed in 6m3.353714933s
	I1003 19:28:46.890261   18398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:28:46.890331   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:46.940191   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:46.940281   18398 retry.go:31] will retry after 331.023002ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:47.273628   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:47.326673   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:47.326764   18398 retry.go:31] will retry after 534.331513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:47.863045   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:47.916372   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:47.916460   18398 retry.go:31] will retry after 647.914861ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:48.566827   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:48.619571   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	W1003 19:28:48.619668   18398 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	
	W1003 19:28:48.619695   18398 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:48.619751   18398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:28:48.619803   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:48.669171   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:48.669268   18398 retry.go:31] will retry after 207.959659ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:48.879261   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:48.933622   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:48.933709   18398 retry.go:31] will retry after 536.128212ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:49.472306   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:49.526343   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	I1003 19:28:49.526445   18398 retry.go:31] will retry after 450.040039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:49.977955   18398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000
	W1003 19:28:50.031465   18398 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000 returned with exit code 1
	W1003 19:28:50.031572   18398 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	
	W1003 19:28:50.031603   18398 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-214000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-214000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000
	I1003 19:28:50.031612   18398 fix.go:56] fixHost completed within 6m26.047414952s
	I1003 19:28:50.031618   18398 start.go:83] releasing machines lock for "offline-docker-214000", held for 6m26.047470657s
	W1003 19:28:50.031701   18398 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-214000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-214000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1003 19:28:50.074892   18398 out.go:177] 
	W1003 19:28:50.096091   18398 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1003 19:28:50.096148   18398 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1003 19:28:50.096180   18398 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1003 19:28:50.118718   18398 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-214000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-10-03 19:28:50.193129 -0700 PDT m=+5995.089591221
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-214000
helpers_test.go:235: (dbg) docker inspect offline-docker-214000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-214000",
	        "Id": "2377816bd16562cdac237ebd79589c6a65e85030e1b4c152a35a5d1e80b09148",
	        "Created": "2023-10-04T02:22:43.804847116Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-214000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-214000 -n offline-docker-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-214000 -n offline-docker-214000: exit status 7 (94.133756ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1003 19:28:50.341649   19013 status.go:249] status error: host: state: unknown state "offline-docker-214000": docker container inspect offline-docker-214000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-214000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-214000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-214000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-214000
--- FAIL: TestOffline (755.65s)

TestIngressAddonLegacy/StartLegacyK8sCluster (265.13s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-010000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1003 18:01:59.773339   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:02:27.464871   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:02:47.989050   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:47.995556   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:48.006967   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:48.029173   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:48.119164   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:48.199358   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:48.359481   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:48.681647   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:49.324044   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:50.606317   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:53.166565   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:02:58.288944   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:03:08.530108   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:03:29.011945   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:04:09.973741   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-010000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m25.090720756s)

-- stdout --
	* [ingress-addon-legacy-010000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-010000 in cluster ingress-addon-legacy-010000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1003 18:00:03.928211   13818 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:00:03.928511   13818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:00:03.928515   13818 out.go:309] Setting ErrFile to fd 2...
	I1003 18:00:03.928519   13818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:00:03.928696   13818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:00:03.930244   13818 out.go:303] Setting JSON to false
	I1003 18:00:03.952067   13818 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5371,"bootTime":1696375832,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 18:00:03.952178   13818 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:00:03.973686   13818 out.go:177] * [ingress-addon-legacy-010000] minikube v1.31.2 on Darwin 14.0
	I1003 18:00:04.037512   13818 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 18:00:04.015968   13818 notify.go:220] Checking for updates...
	I1003 18:00:04.080799   13818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 18:00:04.102681   13818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:00:04.124866   13818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:00:04.146791   13818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 18:00:04.168803   13818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:00:04.196103   13818 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:00:04.254056   13818 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:00:04.254180   13818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:00:04.355505   13818 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:00:04.343851965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:00:04.397436   13818 out.go:177] * Using the docker driver based on user configuration
	I1003 18:00:04.418571   13818 start.go:298] selected driver: docker
	I1003 18:00:04.418600   13818 start.go:902] validating driver "docker" against <nil>
	I1003 18:00:04.418615   13818 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:00:04.422961   13818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:00:04.521954   13818 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:00:04.511270146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:00:04.522130   13818 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 18:00:04.522333   13818 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:00:04.543481   13818 out.go:177] * Using Docker Desktop driver with root privileges
	I1003 18:00:04.564291   13818 cni.go:84] Creating CNI manager for ""
	I1003 18:00:04.564330   13818 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 18:00:04.564347   13818 start_flags.go:321] config:
	{Name:ingress-addon-legacy-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-010000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:00:04.586275   13818 out.go:177] * Starting control plane node ingress-addon-legacy-010000 in cluster ingress-addon-legacy-010000
	I1003 18:00:04.607541   13818 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:00:04.628261   13818 out.go:177] * Pulling base image ...
	I1003 18:00:04.649528   13818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:00:04.649625   13818 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 18:00:04.701104   13818 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1003 18:00:04.701134   13818 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1003 18:00:04.710058   13818 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1003 18:00:04.710080   13818 cache.go:57] Caching tarball of preloaded images
	I1003 18:00:04.710281   13818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:00:04.731484   13818 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1003 18:00:04.773477   13818 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:00:04.857058   13818 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1003 18:00:12.019062   13818 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:00:12.019244   13818 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:00:12.642613   13818 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1003 18:00:12.642857   13818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/config.json ...
	I1003 18:00:12.642883   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/config.json: {Name:mkb9a6b4219967df858e326dee8ff547c4506edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:12.643219   13818 cache.go:195] Successfully downloaded all kic artifacts
	I1003 18:00:12.643247   13818 start.go:365] acquiring machines lock for ingress-addon-legacy-010000: {Name:mk4e71ea5978bef6cf1fe23d7de82860e9a51ef1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:00:12.643400   13818 start.go:369] acquired machines lock for "ingress-addon-legacy-010000" in 106.906µs
	I1003 18:00:12.643424   13818 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-010000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 18:00:12.643502   13818 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:00:12.697255   13818 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1003 18:00:12.697666   13818 start.go:159] libmachine.API.Create for "ingress-addon-legacy-010000" (driver="docker")
	I1003 18:00:12.697753   13818 client.go:168] LocalClient.Create starting
	I1003 18:00:12.697935   13818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 18:00:12.698034   13818 main.go:141] libmachine: Decoding PEM data...
	I1003 18:00:12.698072   13818 main.go:141] libmachine: Parsing certificate...
	I1003 18:00:12.698180   13818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 18:00:12.698258   13818 main.go:141] libmachine: Decoding PEM data...
	I1003 18:00:12.698275   13818 main.go:141] libmachine: Parsing certificate...
	I1003 18:00:12.699138   13818 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-010000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:00:12.753953   13818 cli_runner.go:211] docker network inspect ingress-addon-legacy-010000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:00:12.754071   13818 network_create.go:281] running [docker network inspect ingress-addon-legacy-010000] to gather additional debugging logs...
	I1003 18:00:12.754088   13818 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-010000
	W1003 18:00:12.804828   13818 cli_runner.go:211] docker network inspect ingress-addon-legacy-010000 returned with exit code 1
	I1003 18:00:12.804864   13818 network_create.go:284] error running [docker network inspect ingress-addon-legacy-010000]: docker network inspect ingress-addon-legacy-010000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-010000 not found
	I1003 18:00:12.804881   13818 network_create.go:286] output of [docker network inspect ingress-addon-legacy-010000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-010000 not found
	
	** /stderr **
	I1003 18:00:12.805055   13818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:00:12.856757   13818 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002939a0}
	I1003 18:00:12.856793   13818 network_create.go:124] attempt to create docker network ingress-addon-legacy-010000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1003 18:00:12.856861   13818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-010000 ingress-addon-legacy-010000
	I1003 18:00:12.944008   13818 network_create.go:108] docker network ingress-addon-legacy-010000 192.168.49.0/24 created
	I1003 18:00:12.944062   13818 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-010000" container
	I1003 18:00:12.944200   13818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:00:12.995016   13818 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-010000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-010000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:00:13.048176   13818 oci.go:103] Successfully created a docker volume ingress-addon-legacy-010000
	I1003 18:00:13.048294   13818 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-010000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-010000 --entrypoint /usr/bin/test -v ingress-addon-legacy-010000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 18:00:13.459503   13818 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-010000
	I1003 18:00:13.459543   13818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:00:13.459559   13818 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:00:13.459671   13818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-010000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:00:16.251400   13818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-010000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (2.791616008s)
	I1003 18:00:16.251424   13818 kic.go:199] duration metric: took 2.791814 seconds to extract preloaded images to volume
	I1003 18:00:16.251540   13818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:00:16.352107   13818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-010000 --name ingress-addon-legacy-010000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-010000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-010000 --network ingress-addon-legacy-010000 --ip 192.168.49.2 --volume ingress-addon-legacy-010000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1003 18:00:16.632250   13818 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Running}}
	I1003 18:00:16.686344   13818 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:00:16.742850   13818 cli_runner.go:164] Run: docker exec ingress-addon-legacy-010000 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:00:16.856477   13818 oci.go:144] the created container "ingress-addon-legacy-010000" has a running status.
	I1003 18:00:16.856518   13818 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa...
	I1003 18:00:16.935582   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:00:16.935669   13818 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:00:17.007337   13818 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:00:17.066884   13818 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:00:17.066923   13818 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-010000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:00:17.175028   13818 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:00:17.228086   13818 machine.go:88] provisioning docker machine ...
	I1003 18:00:17.228135   13818 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-010000"
	I1003 18:00:17.228246   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:17.284144   13818 main.go:141] libmachine: Using SSH client type: native
	I1003 18:00:17.284535   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 58340 <nil> <nil>}
	I1003 18:00:17.284558   13818 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-010000 && echo "ingress-addon-legacy-010000" | sudo tee /etc/hostname
	I1003 18:00:17.433213   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-010000
	
	I1003 18:00:17.433308   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:17.484382   13818 main.go:141] libmachine: Using SSH client type: native
	I1003 18:00:17.484695   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 58340 <nil> <nil>}
	I1003 18:00:17.484712   13818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-010000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-010000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-010000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:00:17.622083   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
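The two SSH commands above first set and persist the node hostname, then make sure /etc/hosts maps 127.0.1.1 to the new name so the hostname resolves locally. A minimal sketch of the same two steps, assuming the container name from this log and using `docker exec` for brevity instead of the SSH channel minikube actually opens:

    // provision_hostname.go: set and persist a hostname inside a kic container.
    // Hypothetical standalone sketch; container/profile name taken from the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const container = "ingress-addon-legacy-010000"
    	const name = "ingress-addon-legacy-010000"

    	script := fmt.Sprintf(
    		`hostname %[1]s && echo %[1]s > /etc/hostname && `+
    			`{ grep -q '^127.0.1.1' /etc/hosts && sed -i 's/^127.0.1.1.*/127.0.1.1 %[1]s/' /etc/hosts || `+
    			`echo '127.0.1.1 %[1]s' >> /etc/hosts; }`, name)

    	out, err := exec.Command("docker", "exec", container, "sh", "-c", script).CombinedOutput()
    	if err != nil {
    		fmt.Printf("provisioning failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("hostname provisioned")
    }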
	I1003 18:00:17.622107   13818 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17345-10413/.minikube CaCertPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17345-10413/.minikube}
	I1003 18:00:17.622130   13818 ubuntu.go:177] setting up certificates
	I1003 18:00:17.622140   13818 provision.go:83] configureAuth start
	I1003 18:00:17.622222   13818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-010000
	I1003 18:00:17.673381   13818 provision.go:138] copyHostCerts
	I1003 18:00:17.673420   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17345-10413/.minikube/cert.pem
	I1003 18:00:17.673466   13818 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-10413/.minikube/cert.pem, removing ...
	I1003 18:00:17.673475   13818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-10413/.minikube/cert.pem
	I1003 18:00:17.673576   13818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17345-10413/.minikube/cert.pem (1123 bytes)
	I1003 18:00:17.673790   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17345-10413/.minikube/key.pem
	I1003 18:00:17.673824   13818 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-10413/.minikube/key.pem, removing ...
	I1003 18:00:17.673829   13818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-10413/.minikube/key.pem
	I1003 18:00:17.673901   13818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17345-10413/.minikube/key.pem (1675 bytes)
	I1003 18:00:17.674037   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.pem
	I1003 18:00:17.674062   13818 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.pem, removing ...
	I1003 18:00:17.674066   13818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.pem
	I1003 18:00:17.674224   13818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.pem (1082 bytes)
	I1003 18:00:17.674365   13818 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-010000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-010000]
	I1003 18:00:17.715323   13818 provision.go:172] copyRemoteCerts
	I1003 18:00:17.715381   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:00:17.715441   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:17.767205   13818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:00:17.864482   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:00:17.864553   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:00:17.886960   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:00:17.887030   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1003 18:00:17.910025   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:00:17.910097   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:00:17.933110   13818 provision.go:86] duration metric: configureAuth took 310.950849ms
	I1003 18:00:17.933125   13818 ubuntu.go:193] setting minikube options for container-runtime
	I1003 18:00:17.933273   13818 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:00:17.933343   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:17.984438   13818 main.go:141] libmachine: Using SSH client type: native
	I1003 18:00:17.984758   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 58340 <nil> <nil>}
	I1003 18:00:17.984776   13818 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 18:00:18.122355   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1003 18:00:18.122369   13818 ubuntu.go:71] root file system type: overlay
	I1003 18:00:18.122462   13818 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 18:00:18.122543   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:18.173799   13818 main.go:141] libmachine: Using SSH client type: native
	I1003 18:00:18.174104   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 58340 <nil> <nil>}
	I1003 18:00:18.174158   13818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 18:00:18.321523   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 18:00:18.321622   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:18.372889   13818 main.go:141] libmachine: Using SSH client type: native
	I1003 18:00:18.373185   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 58340 <nil> <nil>}
	I1003 18:00:18.373198   13818 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 18:00:19.014994   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-04 01:00:18.319551995 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1003 18:00:19.015033   13818 machine.go:91] provisioned docker machine in 1.78690177s
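The final provisioning command ("sudo diff -u ... || { mv; daemon-reload; enable; restart; }") only installs the rendered docker.service and restarts Docker when the new unit actually differs from the one on disk, so an unchanged configuration never forces a daemon restart. A minimal sketch of that compare-then-replace idiom, assuming local file paths stand in for the in-container ones:

    // update_unit.go: replace a systemd unit and restart its service only when
    // the staged copy differs from the installed one (hypothetical sketch).
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func updateUnit(current, staged, service string) error {
    	old, _ := os.ReadFile(current) // a missing unit just reads as empty
    	next, err := os.ReadFile(staged)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(old, next) {
    		return os.Remove(staged) // unchanged: drop the staged copy, no restart
    	}
    	if err := os.Rename(staged, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new", "docker")
    	if err != nil {
    		fmt.Println(err)
    	}
    }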
	I1003 18:00:19.015041   13818 client.go:171] LocalClient.Create took 6.317178891s
	I1003 18:00:19.015085   13818 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-010000" took 6.317312679s
	I1003 18:00:19.015098   13818 start.go:300] post-start starting for "ingress-addon-legacy-010000" (driver="docker")
	I1003 18:00:19.015107   13818 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:00:19.015172   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:00:19.015227   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:19.067389   13818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:00:19.166053   13818 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:00:19.170327   13818 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:00:19.170351   13818 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1003 18:00:19.170358   13818 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1003 18:00:19.170363   13818 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1003 18:00:19.170373   13818 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-10413/.minikube/addons for local assets ...
	I1003 18:00:19.170481   13818 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-10413/.minikube/files for local assets ...
	I1003 18:00:19.170646   13818 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/108632.pem -> 108632.pem in /etc/ssl/certs
	I1003 18:00:19.170654   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/108632.pem -> /etc/ssl/certs/108632.pem
	I1003 18:00:19.170847   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:00:19.180121   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/108632.pem --> /etc/ssl/certs/108632.pem (1708 bytes)
	I1003 18:00:19.202468   13818 start.go:303] post-start completed in 187.358565ms
	I1003 18:00:19.202980   13818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-010000
	I1003 18:00:19.254790   13818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/config.json ...
	I1003 18:00:19.255218   13818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:00:19.255275   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:19.306520   13818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:00:19.401289   13818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:00:19.406960   13818 start.go:128] duration metric: createHost completed in 6.763337812s
	I1003 18:00:19.406976   13818 start.go:83] releasing machines lock for "ingress-addon-legacy-010000", held for 6.763461076s
	I1003 18:00:19.407071   13818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-010000
	I1003 18:00:19.458337   13818 ssh_runner.go:195] Run: cat /version.json
	I1003 18:00:19.458370   13818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:00:19.458428   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:19.458432   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:19.511249   13818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:00:19.511521   13818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:00:19.605654   13818 ssh_runner.go:195] Run: systemctl --version
	I1003 18:00:19.711231   13818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:00:19.717189   13818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1003 18:00:19.741959   13818 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1003 18:00:19.742023   13818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1003 18:00:19.759061   13818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1003 18:00:19.775892   13818 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
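The three find/sed passes above patch the loopback, bridge and podman CNI configs shipped in the base image so their subnets line up with the cluster pod CIDR (10.244.0.0/16). A minimal sketch of the same idea done structurally rather than with sed, assuming a JSON .conflist such as the podman bridge file named in the log and that rewriting a top-level "subnet" key is sufficient:

    // patch_cni.go: pin each bridge plugin's IPAM subnet in a CNI conflist to the
    // pod CIDR; a rough structural equivalent of the sed edits shown above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	const podCIDR = "10.244.0.0/16"
    	path := "/etc/cni/net.d/87-podman-bridge.conflist" // example path from the log

    	raw, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var conf map[string]any
    	if err := json.Unmarshal(raw, &conf); err != nil {
    		fmt.Println(err)
    		return
    	}
    	plugins, _ := conf["plugins"].([]any)
    	for _, p := range plugins {
    		plugin, _ := p.(map[string]any)
    		if plugin["type"] == "bridge" {
    			if ipam, ok := plugin["ipam"].(map[string]any); ok {
    				ipam["subnet"] = podCIDR
    			}
    		}
    	}
    	out, _ := json.MarshalIndent(conf, "", "  ")
    	fmt.Println(string(out)) // write back with os.WriteFile in real use
    }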
	I1003 18:00:19.775906   13818 start.go:469] detecting cgroup driver to use...
	I1003 18:00:19.775919   13818 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1003 18:00:19.776035   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:00:19.792358   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1003 18:00:19.803043   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 18:00:19.813887   13818 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 18:00:19.813948   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 18:00:19.824455   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 18:00:19.835152   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 18:00:19.845467   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 18:00:19.855981   13818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:00:19.866283   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 18:00:19.877455   13818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:00:19.887144   13818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:00:19.896160   13818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:00:19.950107   13818 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 18:00:20.042858   13818 start.go:469] detecting cgroup driver to use...
	I1003 18:00:20.042877   13818 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1003 18:00:20.042938   13818 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 18:00:20.066570   13818 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1003 18:00:20.066648   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 18:00:20.079044   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:00:20.096980   13818 ssh_runner.go:195] Run: which cri-dockerd
	I1003 18:00:20.102134   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 18:00:20.113202   13818 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 18:00:20.142990   13818 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 18:00:20.205583   13818 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 18:00:20.291154   13818 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 18:00:20.291256   13818 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 18:00:20.308733   13818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:00:20.395110   13818 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 18:00:20.651971   13818 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 18:00:20.677561   13818 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 18:00:20.749140   13818 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1003 18:00:20.749229   13818 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-010000 dig +short host.docker.internal
	I1003 18:00:20.872835   13818 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1003 18:00:20.872926   13818 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1003 18:00:20.877849   13818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
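The bash one-liner above makes the hosts update idempotent: it filters any existing host.minikube.internal line out of /etc/hosts and appends the fresh mapping, so repeated starts never accumulate duplicates. A minimal sketch of that replace-or-append pattern, assuming direct file access rather than the sudo/cp dance used over SSH:

    // ensure_hosts_entry.go: keep exactly one /etc/hosts line for a given name,
    // mirroring the grep -v / echo / cp one-liner in the log above (sketch only).
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[len(fields)-1] == name {
    			continue // drop any stale mapping for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }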
	I1003 18:00:20.889516   13818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:00:20.940687   13818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:00:20.940757   13818 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 18:00:20.961766   13818 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1003 18:00:20.961781   13818 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1003 18:00:20.961832   13818 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 18:00:20.971350   13818 ssh_runner.go:195] Run: which lz4
	I1003 18:00:20.975714   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 18:00:20.975810   13818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 18:00:20.980216   13818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 18:00:20.980234   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1003 18:00:26.841322   13818 docker.go:628] Took 5.865514 seconds to copy over tarball
	I1003 18:00:26.841396   13818 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 18:00:28.823003   13818 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.981470781s)
	I1003 18:00:28.823021   13818 ssh_runner.go:146] rm: /preloaded.tar.lz4
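Because the images baked into the base image carry the old k8s.gcr.io names, minikube falls back to the versioned preload tarball: it stats /preloaded.tar.lz4 on the node, copies the ~400 MB archive over when it is missing, unpacks it into /var with `tar -I lz4`, and removes the tarball before restarting Docker. A minimal sketch of that check-copy-extract sequence, assuming the paths from the log and an lz4 binary on the node:

    // preload.go: copy a preloaded image tarball onto the node and extract it
    // into /var, the same sequence the ssh_runner steps above perform (sketch).
    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"os/exec"
    )

    func loadPreload(local, remote string) error {
    	if _, err := os.Stat(remote); err == nil {
    		return nil // tarball already on the node, nothing to copy
    	}
    	src, err := os.Open(local)
    	if err != nil {
    		return err
    	}
    	defer src.Close()
    	dst, err := os.Create(remote)
    	if err != nil {
    		return err
    	}
    	if _, err := io.Copy(dst, src); err != nil {
    		dst.Close()
    		return err
    	}
    	if err := dst.Close(); err != nil {
    		return err
    	}
    	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    	if out, err := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", remote).CombinedOutput(); err != nil {
    		return fmt.Errorf("extract: %v\n%s", err, out)
    	}
    	return os.Remove(remote) // free the space once the layers are unpacked
    }

    func main() {
    	if err := loadPreload("preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }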
	I1003 18:00:28.877942   13818 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 18:00:28.887395   13818 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1003 18:00:28.904475   13818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:00:28.961740   13818 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 18:00:30.054853   13818 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.09302138s)
	I1003 18:00:30.054963   13818 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 18:00:30.075768   13818 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1003 18:00:30.075782   13818 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1003 18:00:30.075795   13818 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 18:00:30.081929   13818 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:00:30.082520   13818 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:00:30.082590   13818 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1003 18:00:30.084273   13818 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1003 18:00:30.084392   13818 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:00:30.084884   13818 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1003 18:00:30.084930   13818 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:00:30.084940   13818 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:00:30.090801   13818 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1003 18:00:30.090967   13818 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:00:30.091109   13818 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1003 18:00:30.091310   13818 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:00:30.092552   13818 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1003 18:00:30.092842   13818 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:00:30.092887   13818 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:00:30.092941   13818 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:00:30.796478   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1003 18:00:30.817396   13818 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1003 18:00:30.817434   13818 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I1003 18:00:30.817485   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1003 18:00:30.838634   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1003 18:00:30.882543   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:00:30.902138   13818 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1003 18:00:30.902170   13818 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:00:30.902222   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:00:30.923481   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1003 18:00:31.199076   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1003 18:00:31.219284   13818 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1003 18:00:31.219312   13818 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1003 18:00:31.219365   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1003 18:00:31.239948   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1003 18:00:31.499751   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:00:31.521371   13818 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1003 18:00:31.521399   13818 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:00:31.521452   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:00:31.542741   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1003 18:00:31.801106   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1003 18:00:31.822141   13818 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1003 18:00:31.822174   13818 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I1003 18:00:31.822234   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1003 18:00:31.842019   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1003 18:00:32.126763   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:00:32.147416   13818 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1003 18:00:32.147440   13818 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:00:32.147489   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:00:32.167959   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1003 18:00:32.719470   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:00:32.740844   13818 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1003 18:00:32.740872   13818 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:00:32.740935   13818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:00:32.760332   13818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1003 18:00:32.816296   13818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:00:32.838529   13818 cache_images.go:92] LoadImages completed in 2.762567128s
	W1003 18:00:32.838579   13818 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
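This warning is the first concrete symptom in the run: every registry.k8s.io image was flagged "needs transfer" because only the legacy k8s.gcr.io tags exist in the runtime, and the fallback load then fails because the per-image cache under .minikube/cache/images was never populated. A minimal sketch of the presence check behind the "needs transfer" decision, assuming the docker CLI and one image name from the log (the real code additionally compares the returned ID against the expected digest):

    // image_check.go: decide whether an image must be transferred by asking the
    // runtime for its ID, as the `docker image inspect --format {{.Id}}` calls above do.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func needsTransfer(image string) bool {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // inspect failed: image is not in the runtime at all
    	}
    	return strings.TrimSpace(string(out)) == "" // defensive: empty ID counts as missing
    }

    func main() {
    	img := "registry.k8s.io/coredns:1.6.7"
    	if needsTransfer(img) {
    		fmt.Printf("%s needs transfer from the local image cache\n", img)
    	} else {
    		fmt.Printf("%s already present in the container runtime\n", img)
    	}
    }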
	I1003 18:00:32.838665   13818 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 18:00:32.891629   13818 cni.go:84] Creating CNI manager for ""
	I1003 18:00:32.891644   13818 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 18:00:32.891660   13818 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 18:00:32.891682   13818 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-010000 NodeName:ingress-addon-legacy-010000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1003 18:00:32.891789   13818 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-010000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
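The rendered kubeadm config above is a multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the kubelet document is where the detected cgroupfs driver and the relaxed eviction thresholds land. A minimal sketch, assuming the sigs.k8s.io/yaml module and the staged path from the log, that reads the file back and confirms the kubelet cgroup driver matches what was detected on the host:

    // check_kubeadm_cfg.go: pull cgroupDriver out of the KubeletConfiguration
    // document of a rendered kubeadm.yaml and compare it with the detected driver.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"sigs.k8s.io/yaml"
    )

    func main() {
    	const detected = "cgroupfs" // what detect.go reported for this host above
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		var cfg struct {
    			Kind         string `json:"kind"`
    			CgroupDriver string `json:"cgroupDriver"`
    		}
    		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
    			continue
    		}
    		if cfg.Kind == "KubeletConfiguration" {
    			fmt.Printf("kubelet cgroupDriver=%q, host detected %q, match=%v\n",
    				cfg.CgroupDriver, detected, cfg.CgroupDriver == detected)
    		}
    	}
    }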
	
	I1003 18:00:32.891857   13818 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-010000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-010000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1003 18:00:32.891924   13818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1003 18:00:32.901667   13818 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:00:32.901722   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:00:32.911139   13818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1003 18:00:32.928276   13818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1003 18:00:32.945244   13818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1003 18:00:32.962150   13818 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:00:32.966684   13818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:00:32.978271   13818 certs.go:56] Setting up /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000 for IP: 192.168.49.2
	I1003 18:00:32.978291   13818 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb9fe941b88919b11a8fdee7e1b27e1674823c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:32.978471   13818 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.key
	I1003 18:00:32.978529   13818 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17345-10413/.minikube/proxy-client-ca.key
	I1003 18:00:32.978578   13818 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/client.key
	I1003 18:00:32.978589   13818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/client.crt with IP's: []
	I1003 18:00:33.094595   13818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/client.crt ...
	I1003 18:00:33.094607   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/client.crt: {Name:mk8ec7b58b723ae25dbbf4395d415e950efe7559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:33.094894   13818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/client.key ...
	I1003 18:00:33.094903   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/client.key: {Name:mkf2973043158c097140473a4740e0a06c2e3ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:33.095447   13818 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key.dd3b5fb2
	I1003 18:00:33.095464   13818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 18:00:33.198913   13818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt.dd3b5fb2 ...
	I1003 18:00:33.198922   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt.dd3b5fb2: {Name:mkaa618a93399369de396f0c160c2d12568fadb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:33.199164   13818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key.dd3b5fb2 ...
	I1003 18:00:33.199172   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key.dd3b5fb2: {Name:mk547e1cc97758ec36a0aafa3ab5f8c8a120aaef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:33.199375   13818 certs.go:337] copying /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt
	I1003 18:00:33.199551   13818 certs.go:341] copying /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key
	I1003 18:00:33.199747   13818 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.key
	I1003 18:00:33.199761   13818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.crt with IP's: []
	I1003 18:00:33.279759   13818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.crt ...
	I1003 18:00:33.279768   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.crt: {Name:mk8c58d69552e5b919d1e805cd17c689c7de9382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:33.279992   13818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.key ...
	I1003 18:00:33.280000   13818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.key: {Name:mkd0ad7b46191397d9563948974d1a8a36a915a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:00:33.280203   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:00:33.280229   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:00:33.280247   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:00:33.280267   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:00:33.280290   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:00:33.280305   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:00:33.280327   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:00:33.280349   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:00:33.280433   13818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/10863.pem (1338 bytes)
	W1003 18:00:33.280483   13818 certs.go:433] ignoring /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/10863_empty.pem, impossibly tiny 0 bytes
	I1003 18:00:33.280495   13818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca-key.pem (1675 bytes)
	I1003 18:00:33.280531   13818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:00:33.280560   13818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:00:33.280586   13818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/certs/key.pem (1675 bytes)
	I1003 18:00:33.280651   13818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/108632.pem (1708 bytes)
	I1003 18:00:33.280685   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/10863.pem -> /usr/share/ca-certificates/10863.pem
	I1003 18:00:33.280707   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/108632.pem -> /usr/share/ca-certificates/108632.pem
	I1003 18:00:33.280724   13818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:33.281251   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 18:00:33.305288   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:00:33.328656   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:00:33.351584   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/ingress-addon-legacy-010000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 18:00:33.374569   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:00:33.397592   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 18:00:33.420056   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:00:33.443157   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 18:00:33.465663   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/10863.pem --> /usr/share/ca-certificates/10863.pem (1338 bytes)
	I1003 18:00:33.489477   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/ssl/certs/108632.pem --> /usr/share/ca-certificates/108632.pem (1708 bytes)
	I1003 18:00:33.513489   13818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-10413/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:00:33.536099   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:00:33.553427   13818 ssh_runner.go:195] Run: openssl version
	I1003 18:00:33.559328   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10863.pem && ln -fs /usr/share/ca-certificates/10863.pem /etc/ssl/certs/10863.pem"
	I1003 18:00:33.569558   13818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10863.pem
	I1003 18:00:33.574107   13818 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:54 /usr/share/ca-certificates/10863.pem
	I1003 18:00:33.574152   13818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10863.pem
	I1003 18:00:33.580996   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10863.pem /etc/ssl/certs/51391683.0"
	I1003 18:00:33.591161   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108632.pem && ln -fs /usr/share/ca-certificates/108632.pem /etc/ssl/certs/108632.pem"
	I1003 18:00:33.601236   13818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108632.pem
	I1003 18:00:33.605811   13818 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:54 /usr/share/ca-certificates/108632.pem
	I1003 18:00:33.605858   13818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108632.pem
	I1003 18:00:33.612869   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/108632.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:00:33.623017   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:00:33.633230   13818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:33.637758   13818 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:50 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:33.637809   13818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:00:33.644864   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
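	(Note: the run above installs each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is the convention OpenSSL uses to locate trusted certificates by hash. A minimal local sketch of that convention in Go, using an assumed example path rather than anything from this test run:

// hashlink.go: a minimal sketch of the cert-hash symlinking shown above
// (openssl x509 -hash -noout on the cert, then ln -fs <cert> /etc/ssl/certs/<hash>.0).
// The certificate path below is illustrative, not taken from the test environment.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // assumed example path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}
	)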
	I1003 18:00:33.655006   13818 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 18:00:33.659373   13818 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
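	(Note: the failed ls is how the caller decides this is a fresh node; a missing /var/lib/minikube/certs/etcd directory means no etcd certificates have ever been generated here. A small local analogue of that check, with the path taken from the log and the rest purely illustrative:

// firststart.go: a minimal local analogue of the check above - treat a
// missing /var/lib/minikube/certs/etcd directory as "likely first start".
package main

import (
	"fmt"
	"os"
)

func main() {
	const dir = "/var/lib/minikube/certs/etcd"
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		fmt.Println("certs directory doesn't exist, likely first start")
	} else if err != nil {
		fmt.Println("could not check", dir, ":", err)
	} else {
		fmt.Println(dir, "exists; reusing existing etcd certs")
	}
}
	)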
	I1003 18:00:33.659418   13818 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-010000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
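	(Note: the StartCluster line dumps the full cluster configuration the bootstrap will be driven with: docker driver, 4096 MB of memory, 2 CPUs, Kubernetes v1.18.20, and a single control-plane node at 192.168.49.2:8443. A heavily trimmed, purely illustrative Go mirror of the fields visible in that dump; minikube's real config type has many more fields and may differ in names and types:

// clusterconfig.go: illustrative subset of the config printed above.
package main

import "fmt"

type Node struct {
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int // MB
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
	Nodes             []Node
}

func main() {
	cfg := ClusterConfig{
		Name:              "ingress-addon-legacy-010000",
		Driver:            "docker",
		Memory:            4096,
		CPUs:              2,
		KubernetesVersion: "v1.18.20",
		ContainerRuntime:  "docker",
		Nodes: []Node{{IP: "192.168.49.2", Port: 8443, KubernetesVersion: "v1.18.20", ControlPlane: true, Worker: true}},
	}
	fmt.Printf("%+v\n", cfg)
}
	)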
	I1003 18:00:33.659517   13818 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 18:00:33.678448   13818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:00:33.688280   13818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:00:33.698041   13818 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:00:33.698133   13818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:00:33.707507   13818 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:00:33.707537   13818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
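	(Note: because the docker driver cannot satisfy kubeadm's SystemVerification and several directory/file preflight checks, the init command above passes a long --ignore-preflight-errors list. A small sketch of assembling that flag from the check names shown in the command; the command layout is illustrative:

// preflight.go: assemble the --ignore-preflight-errors value seen above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	flag := "--ignore-preflight-errors=" + strings.Join(ignored, ",")
	fmt.Println("kubeadm init --config /var/tmp/minikube/kubeadm.yaml", flag)
}
	)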
	I1003 18:00:33.759043   13818 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1003 18:00:33.759090   13818 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 18:00:34.006920   13818 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:00:34.007016   13818 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:00:34.007119   13818 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 18:00:34.190409   13818 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:00:34.191194   13818 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:00:34.191239   13818 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 18:00:34.271224   13818 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:00:34.292546   13818 out.go:204]   - Generating certificates and keys ...
	I1003 18:00:34.292635   13818 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 18:00:34.292701   13818 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 18:00:34.502928   13818 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:00:34.851395   13818 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:00:34.992425   13818 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:00:35.085591   13818 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 18:00:35.162379   13818 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 18:00:35.162536   13818 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-010000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:00:35.242772   13818 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 18:00:35.242918   13818 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-010000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:00:35.335907   13818 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:00:35.397418   13818 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:00:35.719107   13818 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 18:00:35.719174   13818 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:00:35.770271   13818 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:00:35.874397   13818 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:00:36.280928   13818 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:00:36.631630   13818 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:00:36.631934   13818 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:00:36.653561   13818 out.go:204]   - Booting up control plane ...
	I1003 18:00:36.653724   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:00:36.653933   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:00:36.654056   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:00:36.654233   13818 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:00:36.654501   13818 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 18:01:16.642150   13818 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1003 18:01:16.642885   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:01:16.643062   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:01:21.644533   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:01:21.644693   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:01:31.645956   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:01:31.646114   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:01:51.648694   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:01:51.648925   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:02:31.651981   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:02:31.652276   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:02:31.652300   13818 kubeadm.go:322] 
	I1003 18:02:31.652349   13818 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1003 18:02:31.652412   13818 kubeadm.go:322] 		timed out waiting for the condition
	I1003 18:02:31.652423   13818 kubeadm.go:322] 
	I1003 18:02:31.652488   13818 kubeadm.go:322] 	This error is likely caused by:
	I1003 18:02:31.652574   13818 kubeadm.go:322] 		- The kubelet is not running
	I1003 18:02:31.652748   13818 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1003 18:02:31.652762   13818 kubeadm.go:322] 
	I1003 18:02:31.652908   13818 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1003 18:02:31.652969   13818 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1003 18:02:31.653007   13818 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1003 18:02:31.653013   13818 kubeadm.go:322] 
	I1003 18:02:31.653158   13818 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1003 18:02:31.653265   13818 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:02:31.653274   13818 kubeadm.go:322] 
	I1003 18:02:31.653343   13818 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1003 18:02:31.653387   13818 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1003 18:02:31.653447   13818 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1003 18:02:31.653491   13818 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1003 18:02:31.653506   13818 kubeadm.go:322] 
	I1003 18:02:31.655431   13818 kubeadm.go:322] W1004 01:00:33.757722    1709 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1003 18:02:31.655618   13818 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1003 18:02:31.655706   13818 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1003 18:02:31.655806   13818 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1003 18:02:31.655911   13818 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:02:31.656017   13818 kubeadm.go:322] W1004 01:00:36.636052    1709 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:02:31.656127   13818 kubeadm.go:322] W1004 01:00:36.636873    1709 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:02:31.656197   13818 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1003 18:02:31.656323   13818 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
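	(Note: the repeated kubelet-check lines show kubeadm polling the kubelet's health endpoint at http://localhost:10248/healthz and getting connection refused until the 4m0s wait-control-plane budget runs out. A minimal sketch of that probe loop in Go; the 5-second interval and 4-minute deadline are assumptions of the sketch, not kubeadm's exact values:

// kubeletcheck.go: poll the kubelet healthz endpoint until it answers or a deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		fmt.Println("kubelet not ready yet:", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}
	)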
	W1003 18:02:31.656415   13818 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-010000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-010000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:00:33.757722    1709 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:00:36.636052    1709 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:00:36.636873    1709 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-010000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-010000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:00:33.757722    1709 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:00:36.636052    1709 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:00:36.636873    1709 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 18:02:31.656452   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
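	(Note: after the first kubeadm init attempt times out, the cluster state is torn down with kubeadm reset --cri-socket /var/run/dockershim.sock --force and init is attempted once more, as the following lines show. An illustrative reset-then-retry loop matching that sequence; the bare binary names on PATH and the single retry are assumptions of the sketch:

// retryinit.go: run kubeadm init, reset and retry once on failure.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	initArgs := []string{"init", "--config", "/var/tmp/minikube/kubeadm.yaml"}
	for attempt := 1; attempt <= 2; attempt++ {
		if err := run("kubeadm", initArgs...); err == nil {
			fmt.Println("kubeadm init succeeded on attempt", attempt)
			return
		}
		fmt.Println("kubeadm init failed on attempt", attempt, "- resetting")
		_ = run("kubeadm", "reset", "--cri-socket", "/var/run/dockershim.sock", "--force")
	}
	fmt.Println("kubeadm init did not succeed after retrying")
	os.Exit(1)
}
	)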
	I1003 18:02:32.073901   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:02:32.085720   13818 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:02:32.085780   13818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:02:32.094901   13818 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:02:32.094926   13818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:02:32.146662   13818 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1003 18:02:32.146704   13818 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 18:02:32.397239   13818 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:02:32.397313   13818 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:02:32.397398   13818 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 18:02:32.583938   13818 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:02:32.584479   13818 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:02:32.584515   13818 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 18:02:32.659943   13818 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:02:32.681455   13818 out.go:204]   - Generating certificates and keys ...
	I1003 18:02:32.681544   13818 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 18:02:32.681630   13818 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 18:02:32.681720   13818 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:02:32.681791   13818 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:02:32.681871   13818 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:02:32.681959   13818 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1003 18:02:32.682067   13818 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:02:32.682123   13818 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:02:32.682187   13818 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:02:32.682238   13818 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:02:32.682279   13818 kubeadm.go:322] [certs] Using the existing "sa" key
	I1003 18:02:32.682339   13818 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:02:32.993182   13818 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:02:33.195399   13818 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:02:33.307837   13818 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:02:33.421198   13818 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:02:33.421917   13818 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:02:33.443486   13818 out.go:204]   - Booting up control plane ...
	I1003 18:02:33.443681   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:02:33.443879   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:02:33.443997   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:02:33.444181   13818 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:02:33.444441   13818 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 18:03:13.434119   13818 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1003 18:03:13.435116   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:03:13.435388   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:03:18.437355   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:03:18.437607   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:03:28.438844   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:03:28.439019   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:03:48.441575   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:03:48.441789   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:04:28.444920   13818 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:04:28.445268   13818 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:04:28.445295   13818 kubeadm.go:322] 
	I1003 18:04:28.445374   13818 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1003 18:04:28.445441   13818 kubeadm.go:322] 		timed out waiting for the condition
	I1003 18:04:28.445452   13818 kubeadm.go:322] 
	I1003 18:04:28.445514   13818 kubeadm.go:322] 	This error is likely caused by:
	I1003 18:04:28.445583   13818 kubeadm.go:322] 		- The kubelet is not running
	I1003 18:04:28.445843   13818 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1003 18:04:28.445870   13818 kubeadm.go:322] 
	I1003 18:04:28.446051   13818 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1003 18:04:28.446079   13818 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1003 18:04:28.446102   13818 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1003 18:04:28.446108   13818 kubeadm.go:322] 
	I1003 18:04:28.446183   13818 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1003 18:04:28.446290   13818 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:04:28.446301   13818 kubeadm.go:322] 
	I1003 18:04:28.446420   13818 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1003 18:04:28.446478   13818 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1003 18:04:28.446553   13818 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1003 18:04:28.446588   13818 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1003 18:04:28.446593   13818 kubeadm.go:322] 
	I1003 18:04:28.448496   13818 kubeadm.go:322] W1004 01:02:32.145696    4783 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1003 18:04:28.448661   13818 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1003 18:04:28.448722   13818 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1003 18:04:28.448838   13818 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1003 18:04:28.448933   13818 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:04:28.449026   13818 kubeadm.go:322] W1004 01:02:33.426198    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:04:28.449132   13818 kubeadm.go:322] W1004 01:02:33.427036    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:04:28.449202   13818 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1003 18:04:28.449305   13818 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:04:28.449348   13818 kubeadm.go:406] StartCluster complete in 3m54.782801346s
	I1003 18:04:28.449477   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 18:04:28.470329   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.470343   13818 logs.go:286] No container was found matching "kube-apiserver"
	I1003 18:04:28.470424   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 18:04:28.490457   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.490470   13818 logs.go:286] No container was found matching "etcd"
	I1003 18:04:28.490536   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 18:04:28.510987   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.511001   13818 logs.go:286] No container was found matching "coredns"
	I1003 18:04:28.511065   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 18:04:28.530554   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.530567   13818 logs.go:286] No container was found matching "kube-scheduler"
	I1003 18:04:28.530636   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 18:04:28.550806   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.550821   13818 logs.go:286] No container was found matching "kube-proxy"
	I1003 18:04:28.550888   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 18:04:28.571029   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.571043   13818 logs.go:286] No container was found matching "kube-controller-manager"
	I1003 18:04:28.571114   13818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 18:04:28.590109   13818 logs.go:284] 0 containers: []
	W1003 18:04:28.590122   13818 logs.go:286] No container was found matching "kindnet"
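	(Note: with the retry also failed, each expected control-plane container is looked up by its k8s_ name prefix via docker ps -a filters, and every lookup above returns zero containers. A minimal sketch of the same lookup:

// findcontainers.go: list container IDs per component via docker ps filters,
// mirroring the docker ps -a --filter name=k8s_<component> --format {{.ID}} calls above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
	)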
	I1003 18:04:28.590137   13818 logs.go:123] Gathering logs for kubelet ...
	I1003 18:04:28.590144   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:04:28.629270   13818 logs.go:123] Gathering logs for dmesg ...
	I1003 18:04:28.629303   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:04:28.644324   13818 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:04:28.644337   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:04:28.703160   13818 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:04:28.703172   13818 logs.go:123] Gathering logs for Docker ...
	I1003 18:04:28.703179   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 18:04:28.721585   13818 logs.go:123] Gathering logs for container status ...
	I1003 18:04:28.721602   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
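	(Note: the log-gathering pass then collects the kubelet and Docker journals, dmesg, kubectl describe nodes, and container status, using the `which crictl || echo crictl` command substitution so a missing crictl falls back to docker ps -a. An illustrative local version that runs the same commands via bash -c and prints their output; in the real run they are executed over SSH inside the node:

// gatherlogs.go: run the diagnostic commands shown above and print their output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		fmt.Println(">>>", c)
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}
}
	)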
	W1003 18:04:28.797399   13818 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:02:32.145696    4783 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:02:33.426198    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:02:33.427036    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:04:28.797425   13818 out.go:239] * 
	* 
	W1003 18:04:28.797501   13818 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:02:32.145696    4783 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:02:33.426198    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:02:33.427036    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:02:32.145696    4783 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:02:33.426198    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:02:33.427036    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:04:28.797529   13818 out.go:239] * 
	* 
	W1003 18:04:28.798140   13818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:04:28.860636   13818 out.go:177] 
	W1003 18:04:28.902688   13818 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:02:32.145696    4783 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:02:33.426198    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:02:33.427036    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:02:32.145696    4783 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:02:33.426198    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:02:33.427036    4783 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:04:28.902753   13818 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1003 18:04:28.902782   13818 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1003 18:04:28.923737   13818 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-010000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (265.13s)
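The failing start above ends exactly where kubeadm warns about the "cgroupfs" Docker cgroup driver and minikube suggests passing --extra-config=kubelet.cgroup-driver=systemd. A minimal manual retry of the same invocation with that override, followed by the kubelet checks taken from the kubeadm advice in the log, could look like the sketch below (profile name, Kubernetes version, and flags are copied from the failing command; treat it as a sketch rather than a verified fix):

    out/minikube-darwin-amd64 start -p ingress-addon-legacy-010000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd

    # If the kubelet still refuses connections on :10248, inspect it inside the node container
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-010000 "sudo systemctl status kubelet"
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-010000 "sudo journalctl -xeu kubelet"
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-010000 "docker ps -a | grep kube | grep -v pause"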

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (117.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-010000 addons enable ingress --alsologtostderr -v=5
E1003 18:05:31.897766   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-010000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m57.07217181s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:04:29.070044   14145 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:04:29.070265   14145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:04:29.070270   14145 out.go:309] Setting ErrFile to fd 2...
	I1003 18:04:29.070274   14145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:04:29.070458   14145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:04:29.071146   14145 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:04:29.071167   14145 addons.go:594] checking whether the cluster is paused
	I1003 18:04:29.071247   14145 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:04:29.071268   14145 host.go:66] Checking if "ingress-addon-legacy-010000" exists ...
	I1003 18:04:29.071655   14145 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:04:29.123245   14145 ssh_runner.go:195] Run: systemctl --version
	I1003 18:04:29.123350   14145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:04:29.174431   14145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:04:29.266174   14145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 18:04:29.308549   14145 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1003 18:04:29.329603   14145 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:04:29.329620   14145 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-010000"
	I1003 18:04:29.329629   14145 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-010000"
	I1003 18:04:29.329687   14145 host.go:66] Checking if "ingress-addon-legacy-010000" exists ...
	I1003 18:04:29.330091   14145 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:04:29.402374   14145 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1003 18:04:29.425208   14145 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1003 18:04:29.445391   14145 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1003 18:04:29.466248   14145 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1003 18:04:29.487634   14145 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 18:04:29.487653   14145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1003 18:04:29.487735   14145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:04:29.539503   14145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:04:29.645012   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:29.699440   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:29.699466   14145 retry.go:31] will retry after 365.928198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:30.067050   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:30.123788   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:30.123809   14145 retry.go:31] will retry after 550.525594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:30.676156   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:30.733325   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:30.733343   14145 retry.go:31] will retry after 719.389308ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:31.453924   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:31.509757   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:31.509777   14145 retry.go:31] will retry after 507.715816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:32.017986   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:32.074619   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:32.074637   14145 retry.go:31] will retry after 826.69393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:32.903024   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:32.958346   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:32.958363   14145 retry.go:31] will retry after 966.923371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:33.925529   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:33.982676   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:33.982702   14145 retry.go:31] will retry after 3.288224047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:37.272219   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:37.330829   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:37.330854   14145 retry.go:31] will retry after 4.590714244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:41.922673   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:41.980578   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:41.980595   14145 retry.go:31] will retry after 7.732166294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:49.713186   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:49.770186   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:49.770214   14145 retry.go:31] will retry after 5.908339145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:55.679295   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:04:55.737521   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:04:55.737542   14145 retry.go:31] will retry after 19.803906607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:05:15.542663   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:05:15.597337   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:05:15.597354   14145 retry.go:31] will retry after 16.576194598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:05:32.176309   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:05:32.233616   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:05:32.233639   14145 retry.go:31] will retry after 25.352375385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:05:57.587661   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:05:57.641242   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:05:57.641259   14145 retry.go:31] will retry after 28.28859281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:25.932703   14145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:06:25.988830   14145 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:25.988856   14145 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-010000"
	I1003 18:06:26.010288   14145 out.go:177] * Verifying ingress addon...
	I1003 18:06:26.032482   14145 out.go:177] 
	W1003 18:06:26.053978   14145 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-010000" does not exist: client config: context "ingress-addon-legacy-010000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-010000" does not exist: client config: context "ingress-addon-legacy-010000" does not exist]
	W1003 18:06:26.054007   14145 out.go:239] * 
	* 
	W1003 18:06:26.058688   14145 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:06:26.080286   14145 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-010000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-010000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced",
	        "Created": "2023-10-04T01:00:16.401087095Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:00:16.622865703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/hostname",
	        "HostsPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/hosts",
	        "LogPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced-json.log",
	        "Name": "/ingress-addon-legacy-010000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-010000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-010000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc-init/diff:/var/lib/docker/overlay2/c197ab651fd344a0d3b26c32e82540cbbd2d6bdc403805474860224a6c52d5a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-010000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-010000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-010000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-010000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-010000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6798f1fde3e1d3555b3389895d79072b3ec2a99b07d140deb822cba50defa4ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58340"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58341"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58338"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58339"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6798f1fde3e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-010000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5497522c2be7",
	                        "ingress-addon-legacy-010000"
	                    ],
	                    "NetworkID": "750c9e68d0af8a312468ead06bea1674d34d1269c493b946320b0c65e2cd5006",
	                    "EndpointID": "023c027793a77a378813dbd29a78b92f1a3844dbb33a24ea05d88baec62d1faa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
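For reference, the ssh connections earlier in this log use Port:58340, which is the "22/tcp" HostPort shown in the inspect output above; the harness extracts it with a Go template applied to docker container inspect. A cleanly quoted, standalone version of that query (same container name as above) would be:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-010000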
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-010000 -n ingress-addon-legacy-010000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-010000 -n ingress-addon-legacy-010000: exit status 6 (373.889475ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:06:26.521842   14203 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-010000" does not appear in /Users/jenkins/minikube-integration/17345-10413/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-010000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (117.50s)
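The enable loop above never reaches an apiserver (every kubectl apply is refused on localhost:8443) and the profile has no entry in the kubeconfig, which is why the status check warns about a stale kubectl context. Assuming the kubeconfig path from this run, the context repair hinted at by that warning would be roughly:

    kubectl --kubeconfig=/Users/jenkins/minikube-integration/17345-10413/kubeconfig config get-contexts
    out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-010000
    out/minikube-darwin-amd64 status -p ingress-addon-legacy-010000

Note that this only repairs the kubeconfig entry; it does not address the kubelet failure that kept the apiserver from starting in the first place.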

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (113.49s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-010000 addons enable ingress-dns --alsologtostderr -v=5
E1003 18:06:59.783949   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:07:47.998258   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:08:15.744094   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-010000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m53.063768227s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:06:26.575190   14213 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:06:26.575488   14213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:06:26.575494   14213 out.go:309] Setting ErrFile to fd 2...
	I1003 18:06:26.575498   14213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:06:26.575684   14213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:06:26.576321   14213 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:06:26.576340   14213 addons.go:594] checking whether the cluster is paused
	I1003 18:06:26.576415   14213 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:06:26.576436   14213 host.go:66] Checking if "ingress-addon-legacy-010000" exists ...
	I1003 18:06:26.576865   14213 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:06:26.627382   14213 ssh_runner.go:195] Run: systemctl --version
	I1003 18:06:26.627493   14213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:06:26.678066   14213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:06:26.771975   14213 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 18:06:26.813655   14213 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1003 18:06:26.834804   14213 config.go:182] Loaded profile config "ingress-addon-legacy-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:06:26.834831   14213 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-010000"
	I1003 18:06:26.834843   14213 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-010000"
	I1003 18:06:26.834926   14213 host.go:66] Checking if "ingress-addon-legacy-010000" exists ...
	I1003 18:06:26.835495   14213 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-010000 --format={{.State.Status}}
	I1003 18:06:26.907427   14213 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1003 18:06:26.928712   14213 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1003 18:06:26.950639   14213 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 18:06:26.950673   14213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1003 18:06:26.950816   14213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-010000
	I1003 18:06:27.003188   14213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58340 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/ingress-addon-legacy-010000/id_rsa Username:docker}
	I1003 18:06:27.108030   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:27.162179   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:27.162207   14213 retry.go:31] will retry after 236.35188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:27.400560   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:27.458260   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:27.458280   14213 retry.go:31] will retry after 325.266786ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:27.785858   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:27.842022   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:27.842044   14213 retry.go:31] will retry after 471.580157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:28.316015   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:28.371359   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:28.371386   14213 retry.go:31] will retry after 927.065671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:29.300794   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:29.359333   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:29.359350   14213 retry.go:31] will retry after 1.246241414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:30.608058   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:30.666920   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:30.666942   14213 retry.go:31] will retry after 2.021993643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:32.690984   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:32.748302   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:32.748322   14213 retry.go:31] will retry after 2.531002976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:35.280953   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:35.348148   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:35.348165   14213 retry.go:31] will retry after 3.033939258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:38.384020   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:38.440851   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:38.440872   14213 retry.go:31] will retry after 8.12665975s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:46.568893   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:46.624319   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:46.624336   14213 retry.go:31] will retry after 10.790237375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:57.417060   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:06:57.473221   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:06:57.473238   14213 retry.go:31] will retry after 19.569253828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:07:17.045731   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:07:17.102174   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:07:17.102192   14213 retry.go:31] will retry after 27.96397982s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:07:45.067544   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:07:45.124522   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:07:45.124540   14213 retry.go:31] will retry after 34.325549155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:08:19.453553   14213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:08:19.511011   14213 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:08:19.532827   14213 out.go:177] 
	W1003 18:08:19.553628   14213 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1003 18:08:19.553667   14213 out.go:239] * 
	* 
	W1003 18:08:19.558321   14213 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:08:19.579511   14213 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
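The stderr above shows addons.go retrying `kubectl apply` against the addon manifest with an increasing delay (236ms, 325ms, 471ms, ... up to ~34s) until it gives up with MK_ADDON_ENABLE, because the apiserver on localhost:8443 never came up. A stripped-down sketch of that retry loop, assuming a plain shell-out and a hypothetical applyWithRetry helper rather than minikube's internal ssh_runner, could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry mirrors the backoff visible in the log: each failed
	// `kubectl apply` roughly doubles the wait before the next attempt.
	func applyWithRetry(manifest string, attempts int) error {
		delay := 250 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("apply failed: %v: %s", e, out)
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/ingress-dns-pod.yaml", 13); err != nil {
			fmt.Println("enable failed:", err)
		}
	}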
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-010000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-010000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced",
	        "Created": "2023-10-04T01:00:16.401087095Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:00:16.622865703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/hostname",
	        "HostsPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/hosts",
	        "LogPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced-json.log",
	        "Name": "/ingress-addon-legacy-010000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-010000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-010000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc-init/diff:/var/lib/docker/overlay2/c197ab651fd344a0d3b26c32e82540cbbd2d6bdc403805474860224a6c52d5a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-010000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-010000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-010000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-010000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-010000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6798f1fde3e1d3555b3389895d79072b3ec2a99b07d140deb822cba50defa4ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58340"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58341"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58338"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58339"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6798f1fde3e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-010000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5497522c2be7",
	                        "ingress-addon-legacy-010000"
	                    ],
	                    "NetworkID": "750c9e68d0af8a312468ead06bea1674d34d1269c493b946320b0c65e2cd5006",
	                    "EndpointID": "023c027793a77a378813dbd29a78b92f1a3844dbb33a24ea05d88baec62d1faa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-010000 -n ingress-addon-legacy-010000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-010000 -n ingress-addon-legacy-010000: exit status 6 (373.894937ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:08:20.019214   14262 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-010000" does not appear in /Users/jenkins/minikube-integration/17345-10413/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-010000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (113.49s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:179: failed to get Kubernetes client: <nil>
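The "failed to get Kubernetes client" error above means the test could not build a clientset for the profile, consistent with the missing kubeconfig entry reported by the earlier status checks. A minimal sketch of building such a client with client-go (not necessarily the helper the suite itself calls) could look like:

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the test environment above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17345-10413/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, "build rest config:", err)
			os.Exit(1)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, "new clientset:", err)
			os.Exit(1)
		}
		// Touch the API once so a stale or missing endpoint fails loudly here.
		if _, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{}); err != nil {
			fmt.Fprintln(os.Stderr, "list pods:", err)
			os.Exit(1)
		}
		fmt.Println("client OK")
	}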
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-010000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-010000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced",
	        "Created": "2023-10-04T01:00:16.401087095Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:00:16.622865703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/hostname",
	        "HostsPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/hosts",
	        "LogPath": "/var/lib/docker/containers/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced/5497522c2be72d94eb1054300c741caacb444d8a69a5ab0dfd33461a136adced-json.log",
	        "Name": "/ingress-addon-legacy-010000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-010000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-010000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc-init/diff:/var/lib/docker/overlay2/c197ab651fd344a0d3b26c32e82540cbbd2d6bdc403805474860224a6c52d5a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e345038e88866e8877c4b3486a3e8a8da49ccb4c6f7b3717d04a3dda81cc0dbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-010000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-010000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-010000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-010000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-010000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6798f1fde3e1d3555b3389895d79072b3ec2a99b07d140deb822cba50defa4ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58340"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58341"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58338"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58339"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6798f1fde3e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-010000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5497522c2be7",
	                        "ingress-addon-legacy-010000"
	                    ],
	                    "NetworkID": "750c9e68d0af8a312468ead06bea1674d34d1269c493b946320b0c65e2cd5006",
	                    "EndpointID": "023c027793a77a378813dbd29a78b92f1a3844dbb33a24ea05d88baec62d1faa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-010000 -n ingress-addon-legacy-010000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-010000 -n ingress-addon-legacy-010000: exit status 6 (390.956444ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:08:20.462952   14274 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-010000" does not appear in /Users/jenkins/minikube-integration/17345-10413/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-010000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (872.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-772000 ssh -- ls /minikube-host
E1003 18:13:22.849205   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:16:59.804536   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:17:48.020627   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:19:11.129513   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:21:59.816828   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:22:48.031079   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:26:59.829492   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-772000 ssh -- ls /minikube-host: signal: killed (14m31.831423745s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-772000 ssh -- ls /minikube-host" : signal: killed
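The ssh command above never returned and was killed after roughly 14m31s, apparently by the suite's own timeout rather than by an error from the mount itself. A small sketch of running the same check with an explicit deadline, assuming a context-based timeout instead of whatever the test harness uses internally, could look like:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Generous but finite deadline; the run above was only stopped after ~14m31s.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"-p", "mount-start-2-772000", "ssh", "--", "ls", "/minikube-host")
		out, err := cmd.CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("mount check timed out (host mount likely hung)")
			return
		}
		if err != nil {
			fmt.Printf("mount check failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("mount contents:\n%s\n", out)
	}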
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-772000
helpers_test.go:235: (dbg) docker inspect mount-start-2-772000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c04b6844724005fc041874babdbb562f4688066e650a97d00d8ca42f17efe5e",
	        "Created": "2023-10-04T01:12:53.507136676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 107604,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:13:04.760050435Z",
	            "FinishedAt": "2023-10-04T01:13:02.443629496Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/9c04b6844724005fc041874babdbb562f4688066e650a97d00d8ca42f17efe5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c04b6844724005fc041874babdbb562f4688066e650a97d00d8ca42f17efe5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c04b6844724005fc041874babdbb562f4688066e650a97d00d8ca42f17efe5e/hosts",
	        "LogPath": "/var/lib/docker/containers/9c04b6844724005fc041874babdbb562f4688066e650a97d00d8ca42f17efe5e/9c04b6844724005fc041874babdbb562f4688066e650a97d00d8ca42f17efe5e-json.log",
	        "Name": "/mount-start-2-772000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-772000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-772000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4866b8bc096e5f70d10dfb219b811c72ae42f4b50b269b808999a99d77f5db2-init/diff:/var/lib/docker/overlay2/c197ab651fd344a0d3b26c32e82540cbbd2d6bdc403805474860224a6c52d5a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4866b8bc096e5f70d10dfb219b811c72ae42f4b50b269b808999a99d77f5db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4866b8bc096e5f70d10dfb219b811c72ae42f4b50b269b808999a99d77f5db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4866b8bc096e5f70d10dfb219b811c72ae42f4b50b269b808999a99d77f5db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-772000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-772000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-772000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-772000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-772000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe0e063963cb4045155e33275b5f16df1832e6768c9a23842bf137a38281bab0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58645"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58646"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58647"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58648"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58649"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fe0e063963cb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-772000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9c04b6844724",
	                        "mount-start-2-772000"
	                    ],
	                    "NetworkID": "10af3b9eeff0d9f2830bcb75792db79aac39d72c115058d1e77cdc0c16de8bb6",
	                    "EndpointID": "14195447e6902e56ff4a05b96724769182f5a08a523f9174a850cf2c4a018226",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
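
The Ports section of the inspect output above (22/tcp published on 127.0.0.1:58645, and so on) is what the docker driver later reads to reach the node over SSH; the cli_runner lines further down run docker container inspect with exactly that Go template. A rough, self-contained Go sketch of the same lookup, shelling out to the docker CLI (the sshHostPort helper is illustrative, not minikube's actual function):

	// sketch.go — illustrative only, not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort shells out to the docker CLI with the same Go template that
	// the cli_runner log lines use, and returns the host port published for
	// 22/tcp on the named container.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Against the container inspected above, this would print 58645.
		port, err := sshHostPort("mount-start-2-772000")
		fmt.Println(port, err)
	}
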
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-772000 -n mount-start-2-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-772000 -n mount-start-2-772000: exit status 6 (369.950066ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:27:44.550403   16251 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-772000" does not appear in /Users/jenkins/minikube-integration/17345-10413/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-772000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (872.26s)
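
The exit status 6 above comes from the endpoint lookup in status.go: the profile name is looked up as a cluster entry in the kubeconfig, "mount-start-2-772000" has no such entry in /Users/jenkins/minikube-integration/17345-10413/kubeconfig, and that is also why the stdout block suggests `minikube update-context` to rewrite the stale context. A rough sketch of that kind of lookup using client-go's clientcmd (an assumed dependency; endpointFor is illustrative, not minikube's actual code):

	// sketch.go — illustrative only; assumes k8s.io/client-go as a dependency.
	package main

	import (
		"fmt"
		"net/url"

		"k8s.io/client-go/tools/clientcmd"
	)

	// endpointFor loads the kubeconfig and returns the API server host for the
	// named cluster; it fails the same way the log above does when the profile
	// has no cluster entry in the file.
	func endpointFor(kubeconfig, name string) (string, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return "", err
		}
		cluster, ok := cfg.Clusters[name]
		if !ok {
			return "", fmt.Errorf("%q does not appear in %s", name, kubeconfig)
		}
		u, err := url.Parse(cluster.Server)
		if err != nil {
			return "", err
		}
		return u.Host, nil
	}

	func main() {
		host, err := endpointFor(
			"/Users/jenkins/minikube-integration/17345-10413/kubeconfig",
			"mount-start-2-772000",
		)
		fmt.Println(host, err)
	}
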

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (758.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-530000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1003 18:30:02.889467   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:31:59.842296   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:32:48.058511   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:35:51.133722   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:36:59.814119   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:37:48.029079   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-530000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m38.158352377s)

                                                
                                                
-- stdout --
	* [multinode-530000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-530000 in cluster multinode-530000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-530000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:28:53.626216   16366 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:28:53.626415   16366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:28:53.626422   16366 out.go:309] Setting ErrFile to fd 2...
	I1003 18:28:53.626425   16366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:28:53.627174   16366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:28:53.629041   16366 out.go:303] Setting JSON to false
	I1003 18:28:53.651217   16366 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7101,"bootTime":1696375832,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 18:28:53.651334   16366 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:28:53.673340   16366 out.go:177] * [multinode-530000] minikube v1.31.2 on Darwin 14.0
	I1003 18:28:53.715302   16366 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 18:28:53.715350   16366 notify.go:220] Checking for updates...
	I1003 18:28:53.758961   16366 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 18:28:53.801180   16366 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:28:53.822103   16366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:28:53.843230   16366 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 18:28:53.864315   16366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:28:53.885760   16366 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:28:53.943953   16366 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:28:53.944106   16366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:28:54.045780   16366 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-04 01:28:54.034097832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:28:54.089593   16366 out.go:177] * Using the docker driver based on user configuration
	I1003 18:28:54.110319   16366 start.go:298] selected driver: docker
	I1003 18:28:54.110350   16366 start.go:902] validating driver "docker" against <nil>
	I1003 18:28:54.110369   16366 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:28:54.114387   16366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:28:54.212114   16366 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-04 01:28:54.201705305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:28:54.212281   16366 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 18:28:54.212516   16366 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:28:54.234228   16366 out.go:177] * Using Docker Desktop driver with root privileges
	I1003 18:28:54.255922   16366 cni.go:84] Creating CNI manager for ""
	I1003 18:28:54.255949   16366 cni.go:136] 0 nodes found, recommending kindnet
	I1003 18:28:54.255963   16366 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:28:54.255983   16366 start_flags.go:321] config:
	{Name:multinode-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:28:54.298893   16366 out.go:177] * Starting control plane node multinode-530000 in cluster multinode-530000
	I1003 18:28:54.320972   16366 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:28:54.342669   16366 out.go:177] * Pulling base image ...
	I1003 18:28:54.384823   16366 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:28:54.384874   16366 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 18:28:54.384914   16366 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 18:28:54.384932   16366 cache.go:57] Caching tarball of preloaded images
	I1003 18:28:54.385143   16366 preload.go:174] Found /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 18:28:54.385166   16366 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 18:28:54.386807   16366 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/multinode-530000/config.json ...
	I1003 18:28:54.386910   16366 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/multinode-530000/config.json: {Name:mk446c966b53e47169a410aad6e6a4b48cc3e6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:28:54.439094   16366 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1003 18:28:54.439109   16366 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1003 18:28:54.439134   16366 cache.go:195] Successfully downloaded all kic artifacts
	I1003 18:28:54.439166   16366 start.go:365] acquiring machines lock for multinode-530000: {Name:mk882a18747e7f7b67f76932b37aedc8a7b77799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:28:54.439325   16366 start.go:369] acquired machines lock for "multinode-530000" in 148.388µs
	I1003 18:28:54.439350   16366 start.go:93] Provisioning new machine with config: &{Name:multinode-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-530000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 18:28:54.439429   16366 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:28:54.462721   16366 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 18:28:54.463049   16366 start.go:159] libmachine.API.Create for "multinode-530000" (driver="docker")
	I1003 18:28:54.463095   16366 client.go:168] LocalClient.Create starting
	I1003 18:28:54.463274   16366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 18:28:54.463362   16366 main.go:141] libmachine: Decoding PEM data...
	I1003 18:28:54.463396   16366 main.go:141] libmachine: Parsing certificate...
	I1003 18:28:54.463548   16366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 18:28:54.463616   16366 main.go:141] libmachine: Decoding PEM data...
	I1003 18:28:54.463633   16366 main.go:141] libmachine: Parsing certificate...
	I1003 18:28:54.464499   16366 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:28:54.515612   16366 cli_runner.go:211] docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:28:54.515711   16366 network_create.go:281] running [docker network inspect multinode-530000] to gather additional debugging logs...
	I1003 18:28:54.515726   16366 cli_runner.go:164] Run: docker network inspect multinode-530000
	W1003 18:28:54.566234   16366 cli_runner.go:211] docker network inspect multinode-530000 returned with exit code 1
	I1003 18:28:54.566261   16366 network_create.go:284] error running [docker network inspect multinode-530000]: docker network inspect multinode-530000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-530000 not found
	I1003 18:28:54.566272   16366 network_create.go:286] output of [docker network inspect multinode-530000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-530000 not found
	
	** /stderr **
	I1003 18:28:54.566400   16366 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:28:54.618265   16366 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:28:54.618634   16366 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fb6910}
	I1003 18:28:54.618649   16366 network_create.go:124] attempt to create docker network multinode-530000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1003 18:28:54.618712   16366 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	I1003 18:28:54.706363   16366 network_create.go:108] docker network multinode-530000 192.168.58.0/24 created
	I1003 18:28:54.706407   16366 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-530000" container
	I1003 18:28:54.706530   16366 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:28:54.757394   16366 cli_runner.go:164] Run: docker volume create multinode-530000 --label name.minikube.sigs.k8s.io=multinode-530000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:28:54.809605   16366 oci.go:103] Successfully created a docker volume multinode-530000
	I1003 18:28:54.809729   16366 cli_runner.go:164] Run: docker run --rm --name multinode-530000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-530000 --entrypoint /usr/bin/test -v multinode-530000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 18:28:55.220797   16366 oci.go:107] Successfully prepared a docker volume multinode-530000
	I1003 18:28:55.220828   16366 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:28:55.220843   16366 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:28:55.220942   16366 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-530000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:34:54.448613   16366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:34:54.448725   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:54.504532   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:54.504643   16366 retry.go:31] will retry after 197.810711ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:54.702931   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:54.756345   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:54.756452   16366 retry.go:31] will retry after 219.265788ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:54.977998   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:55.031857   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:55.031975   16366 retry.go:31] will retry after 615.174909ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:55.648492   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:55.703159   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:34:55.703251   16366 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:34:55.703274   16366 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:55.703339   16366 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:34:55.703395   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:55.753394   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:55.753490   16366 retry.go:31] will retry after 204.20972ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:55.960061   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:56.013893   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:56.013982   16366 retry.go:31] will retry after 257.1009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:56.272392   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:56.326328   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:56.326414   16366 retry.go:31] will retry after 759.451091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:57.087918   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:57.139163   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:34:57.139249   16366 retry.go:31] will retry after 495.562614ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:57.636882   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:34:57.691470   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:34:57.691580   16366 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:34:57.691603   16366 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:34:57.691614   16366 start.go:128] duration metric: createHost completed in 6m3.268940647s
	I1003 18:34:57.691620   16366 start.go:83] releasing machines lock for "multinode-530000", held for 6m3.269051227s
	W1003 18:34:57.691634   16366 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1003 18:34:57.692047   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:34:57.741642   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:34:57.741688   16366 delete.go:82] Unable to get host status for multinode-530000, assuming it has already been deleted: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	W1003 18:34:57.741767   16366 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1003 18:34:57.741777   16366 start.go:703] Will try again in 5 seconds ...
	I1003 18:35:02.740510   16366 start.go:365] acquiring machines lock for multinode-530000: {Name:mk882a18747e7f7b67f76932b37aedc8a7b77799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:35:02.740636   16366 start.go:369] acquired machines lock for "multinode-530000" in 78.65µs
	I1003 18:35:02.740656   16366 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:35:02.740663   16366 fix.go:54] fixHost starting: 
	I1003 18:35:02.740905   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:02.793070   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:02.793111   16366 fix.go:102] recreateIfNeeded on multinode-530000: state= err=unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:02.793151   16366 fix.go:107] machineExists: false. err=machine does not exist
	I1003 18:35:02.815099   16366 out.go:177] * docker "multinode-530000" container is missing, will recreate.
	I1003 18:35:02.858274   16366 delete.go:124] DEMOLISHING multinode-530000 ...
	I1003 18:35:02.858389   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:02.908950   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:35:02.909002   16366 stop.go:75] unable to get state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:02.909030   16366 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:02.909400   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:02.959307   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:02.959371   16366 delete.go:82] Unable to get host status for multinode-530000, assuming it has already been deleted: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:02.959451   16366 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:35:03.009689   16366 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:35:03.009720   16366 kic.go:367] could not find the container multinode-530000 to remove it. will try anyways
	I1003 18:35:03.009797   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:03.118635   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:35:03.118677   16366 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:03.118773   16366 cli_runner.go:164] Run: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0"
	W1003 18:35:03.169573   16366 cli_runner.go:211] docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1003 18:35:03.169606   16366 oci.go:647] error shutdown multinode-530000: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:04.170745   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:04.224373   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:04.224415   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:04.224430   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:04.224450   16366 retry.go:31] will retry after 264.849608ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:04.491648   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:04.545407   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:04.545450   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:04.545463   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:04.545484   16366 retry.go:31] will retry after 410.23824ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:04.957439   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:05.014016   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:05.014055   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:05.014067   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:05.014087   16366 retry.go:31] will retry after 1.441506706s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:06.456552   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:06.508142   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:06.508192   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:06.508201   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:06.508226   16366 retry.go:31] will retry after 2.337520713s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:08.846448   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:08.897818   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:08.897858   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:08.897872   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:08.897894   16366 retry.go:31] will retry after 3.722714864s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:12.621246   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:12.675086   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:12.675129   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:12.675139   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:12.675158   16366 retry.go:31] will retry after 5.313960848s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:17.988855   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:18.041504   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:18.041567   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:18.041587   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:18.041618   16366 retry.go:31] will retry after 5.444568011s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:23.487479   16366 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:35:23.539729   16366 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:35:23.539772   16366 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:35:23.539784   16366 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:35:23.539809   16366 oci.go:88] couldn't shut down multinode-530000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	 
	I1003 18:35:23.539889   16366 cli_runner.go:164] Run: docker rm -f -v multinode-530000
	I1003 18:35:23.590190   16366 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:35:23.640126   16366 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:35:23.640226   16366 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:35:23.690675   16366 cli_runner.go:164] Run: docker network rm multinode-530000
	I1003 18:35:23.787826   16366 fix.go:114] Sleeping 1 second for extra luck!
	I1003 18:35:24.788962   16366 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:35:24.811228   16366 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 18:35:24.811417   16366 start.go:159] libmachine.API.Create for "multinode-530000" (driver="docker")
	I1003 18:35:24.811447   16366 client.go:168] LocalClient.Create starting
	I1003 18:35:24.811638   16366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 18:35:24.811720   16366 main.go:141] libmachine: Decoding PEM data...
	I1003 18:35:24.811748   16366 main.go:141] libmachine: Parsing certificate...
	I1003 18:35:24.811828   16366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 18:35:24.811888   16366 main.go:141] libmachine: Decoding PEM data...
	I1003 18:35:24.811937   16366 main.go:141] libmachine: Parsing certificate...
	I1003 18:35:24.812640   16366 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:35:24.868008   16366 cli_runner.go:211] docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:35:24.868101   16366 network_create.go:281] running [docker network inspect multinode-530000] to gather additional debugging logs...
	I1003 18:35:24.868120   16366 cli_runner.go:164] Run: docker network inspect multinode-530000
	W1003 18:35:24.918694   16366 cli_runner.go:211] docker network inspect multinode-530000 returned with exit code 1
	I1003 18:35:24.918721   16366 network_create.go:284] error running [docker network inspect multinode-530000]: docker network inspect multinode-530000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-530000 not found
	I1003 18:35:24.918735   16366 network_create.go:286] output of [docker network inspect multinode-530000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-530000 not found
	
	** /stderr **
	I1003 18:35:24.918891   16366 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:35:24.970757   16366 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:35:24.972172   16366 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:35:24.972539   16366 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f03fa0}
	I1003 18:35:24.972553   16366 network_create.go:124] attempt to create docker network multinode-530000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1003 18:35:24.972618   16366 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	W1003 18:35:25.021971   16366 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000 returned with exit code 1
	W1003 18:35:25.022008   16366 network_create.go:149] failed to create docker network multinode-530000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1003 18:35:25.022029   16366 network_create.go:116] failed to create docker network multinode-530000 192.168.67.0/24, will retry: subnet is taken
	I1003 18:35:25.023507   16366 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:35:25.023966   16366 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000df9770}
	I1003 18:35:25.023977   16366 network_create.go:124] attempt to create docker network multinode-530000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1003 18:35:25.024044   16366 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	I1003 18:35:25.109496   16366 network_create.go:108] docker network multinode-530000 192.168.76.0/24 created
	I1003 18:35:25.109527   16366 kic.go:117] calculated static IP "192.168.76.2" for the "multinode-530000" container
	I1003 18:35:25.109672   16366 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:35:25.160651   16366 cli_runner.go:164] Run: docker volume create multinode-530000 --label name.minikube.sigs.k8s.io=multinode-530000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:35:25.210379   16366 oci.go:103] Successfully created a docker volume multinode-530000
	I1003 18:35:25.210494   16366 cli_runner.go:164] Run: docker run --rm --name multinode-530000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-530000 --entrypoint /usr/bin/test -v multinode-530000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 18:35:25.525190   16366 oci.go:107] Successfully prepared a docker volume multinode-530000
	I1003 18:35:25.525217   16366 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:35:25.525241   16366 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:35:25.525330   16366 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-530000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:41:24.823927   16366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:24.824051   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:24.878807   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:24.878939   16366 retry.go:31] will retry after 344.245078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:25.223807   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:25.278171   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:25.278269   16366 retry.go:31] will retry after 486.632762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:25.766688   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:25.821920   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:25.822026   16366 retry.go:31] will retry after 463.432834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:26.287920   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:26.340384   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:41:26.340485   16366 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:41:26.340507   16366 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:26.340557   16366 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:41:26.340621   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:26.390472   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:26.390570   16366 retry.go:31] will retry after 358.362496ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:26.751149   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:26.806183   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:26.806275   16366 retry.go:31] will retry after 396.3616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:27.203332   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:27.257117   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:27.257215   16366 retry.go:31] will retry after 313.549854ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:27.573184   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:27.626284   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:27.626391   16366 retry.go:31] will retry after 704.216185ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:28.333041   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:28.386932   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:41:28.387041   16366 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:41:28.387061   16366 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:28.387070   16366 start.go:128] duration metric: createHost completed in 6m3.585918696s
	I1003 18:41:28.387133   16366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:41:28.387199   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:28.438534   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:28.438615   16366 retry.go:31] will retry after 351.385508ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:28.792399   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:28.846048   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:28.846137   16366 retry.go:31] will retry after 288.817873ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:29.136690   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:29.190850   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:29.190942   16366 retry.go:31] will retry after 682.356462ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:29.875647   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:29.928508   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:41:29.928608   16366 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:41:29.928632   16366 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:29.928709   16366 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:41:29.928771   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:29.978307   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:29.978406   16366 retry.go:31] will retry after 251.911536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:30.230675   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:30.283629   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:30.283720   16366 retry.go:31] will retry after 219.933975ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:30.506047   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:30.558534   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:30.558621   16366 retry.go:31] will retry after 374.70597ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:30.935737   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:30.990165   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:41:30.990247   16366 retry.go:31] will retry after 513.26495ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:31.505893   16366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:41:31.560365   16366 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:41:31.560470   16366 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:41:31.560488   16366 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:41:31.560498   16366 fix.go:56] fixHost completed within 6m28.811451995s
	I1003 18:41:31.560506   16366 start.go:83] releasing machines lock for "multinode-530000", held for 6m28.811478901s
	W1003 18:41:31.560582   16366 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-530000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-530000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1003 18:41:31.604060   16366 out.go:177] 
	W1003 18:41:31.626124   16366 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1003 18:41:31.626180   16366 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1003 18:41:31.626216   16366 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1003 18:41:31.668986   16366 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-530000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
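Two things in the stderr block above account for this exit. First, timing: createHost began at 18:35:24, the preload extraction (docker run ... tar -I lz4 -xf /preloaded.tar) was issued at 18:35:25, and the next log line is stamped 18:41:24, so host creation ran just over the 360-second limit and surfaced as DRV_CREATE_TIMEOUT before the node container was ever created, which is why every later step reports "No such container: multinode-530000". Second, the network setup only succeeded on the second attempt: 192.168.67.0/24 was rejected with "Pool overlaps with other one on this address space" and minikube fell back to 192.168.76.0/24. That fallback idea can be reproduced outside the harness; below is a minimal Go sketch (hypothetical helper, not minikube's network_create.go) that shells out to the same docker CLI and skips subnets that overlap an existing pool.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// createNetworkWithFallback tries each candidate /24 in order and returns the
	// first subnet docker accepts. "Pool overlaps" means another docker network
	// already uses that range, so we move on to the next candidate.
	func createNetworkWithFallback(name string, candidates []string) (string, error) {
		for _, subnet := range candidates {
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			if strings.Contains(string(out), "Pool overlaps") {
				continue // subnet taken, try the next one
			}
			return "", fmt.Errorf("docker network create %s: %v: %s", subnet, err, out)
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := createNetworkWithFallback("demo-net",
			[]string{"192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"})
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("created demo-net on", subnet)
	}

On a host with the same networks present, this sketch should skip 192.168.67.0/24 for the same reason the log above does.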
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.791968ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:41:31.893869   16743 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (758.32s)
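Nearly all of the retry noise in the stderr block comes from a single probe: the docker container inspect template over .NetworkSettings.Ports "22/tcp", which asks Docker which host port was published for the container's SSH port. The df -h /var and df -BG /var checks fail only because that SSH session can never be established. With no container present, the probe returns "No such container" on every attempt until the retry budget is spent. A small Go sketch of the same lookup with a bounded retry follows (illustrative only; the varying delays in the log suggest minikube's retry.go uses randomized backoff rather than a fixed sleep).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// sshHostPort asks Docker which host port is mapped to container port 22/tcp.
	// It retries a few times with a fixed delay, since the container may still be
	// starting; "No such container" after all attempts means it was never created.
	func sshHostPort(container string, attempts int, delay time.Duration) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("docker", "container", "inspect",
				"-f", format, container).CombinedOutput()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			lastErr = fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
			time.Sleep(delay)
		}
		return "", lastErr
	}

	func main() {
		port, err := sshHostPort("multinode-530000", 3, 500*time.Millisecond)
		if err != nil {
			fmt.Println("could not resolve SSH port:", err)
			return
		}
		fmt.Println("SSH is published on host port", port)
	}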

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (109.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (78.331359ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-530000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- rollout status deployment/busybox: exit status 1 (78.253439ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (78.822088ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (78.846182ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (83.008093ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (83.99549ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (79.37192ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (84.821175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (84.844981ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1003 18:41:59.824968   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (83.161445ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (85.751927ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1003 18:42:48.039253   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (85.105742ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (84.763296ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (78.471079ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- exec  -- nslookup kubernetes.io: exit status 1 (79.269074ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- exec  -- nslookup kubernetes.default: exit status 1 (78.339302ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (78.088943ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.024995ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:21.292802   16817 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (109.39s)
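Every step in this subtest fails the same way: kubectl is pointed at cluster "multinode-530000", which was never written to the kubeconfig because the start in FreshStart2Nodes exited before provisioning, so each command returns "no server found for cluster". A quick sanity check before debugging the workload is to list the clusters the kubeconfig actually defines; here is a hedged Go sketch (assumes kubectl on PATH and the kubeconfig pointed to by KUBECONFIG, not part of the test suite).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasCluster reports whether the active kubeconfig defines a cluster entry
	// with the given name, using `kubectl config get-clusters`.
	func hasCluster(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-clusters").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if strings.TrimSpace(line) == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasCluster("multinode-530000")
		if err != nil {
			fmt.Println("kubectl config get-clusters failed:", err)
			return
		}
		fmt.Printf("cluster multinode-530000 present in kubeconfig: %v\n", ok)
	}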

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-530000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (77.72939ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-530000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.901511ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:21.519699   16826 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.23s)
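Note what the post-mortems are actually inspecting: a bare "docker inspect multinode-530000" matches any Docker object with that name, and since only the bridge network was created (the node container never was), every post-mortem prints the network's JSON rather than a container. Pinning the object type removes the ambiguity; a minimal sketch using the CLI's --type flag (docker inspect accepts container, network, volume, image, and other types) is shown below.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectByType runs `docker inspect --type=<kind> <name>` so that a network
	// sharing the container's name cannot mask the fact that the container is missing.
	func inspectByType(kind, name string) (string, error) {
		out, err := exec.Command("docker", "inspect", "--type="+kind,
			"--format", "{{.Name}}", name).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		for _, kind := range []string{"container", "network"} {
			if out, err := inspectByType(kind, "multinode-530000"); err != nil {
				fmt.Printf("%-9s: not found (%s)\n", kind, out)
			} else {
				fmt.Printf("%-9s: %s\n", kind, out)
			}
		}
	}

Run against the state captured in this report, the container lookup fails while the network lookup succeeds, which matches the "No such container" errors above.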

                                                
                                    
x
+
TestMultiNode/serial/AddNode (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-530000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-530000 -v 3 --alsologtostderr: exit status 80 (186.338321ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:43:21.562077   16830 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:43:21.562369   16830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:21.562374   16830 out.go:309] Setting ErrFile to fd 2...
	I1003 18:43:21.562378   16830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:21.562591   16830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:43:21.562940   16830 mustload.go:65] Loading cluster: multinode-530000
	I1003 18:43:21.563234   16830 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:43:21.563650   16830 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:21.614360   16830 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:21.636998   16830 out.go:177] 
	W1003 18:43:21.658698   16830 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:43:21.658725   16830 out.go:239] * 
	* 
	W1003 18:43:21.663499   16830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:43:21.684622   16830 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-530000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.17568ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:21.854510   16836 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.33s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:155: expected profile "multinode-530000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-772000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-530000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-530000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.2\",\"ClusterName\":\"multinode-530000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (92.427733ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:22.168958   16848 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.31s)
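[Editor's note] The ProfileList check decodes the JSON printed by `profile list --output json` and counts the nodes recorded for the profile; this run recorded only one node instead of the expected three. A minimal sketch of that decoding, using only the field names visible in the output above (the binary path and the struct shapes are assumptions, not minikube's actual types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList mirrors the "invalid"/"valid" keys shown in the failure above.
type profileList struct {
	Invalid []struct {
		Name string
	} `json:"invalid"`
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Binary path taken from this report; assumes the same working directory.
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		// The test above expects 3 nodes here; this run recorded only 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}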

                                                
                                    
TestMultiNode/serial/CopyFile (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status --output json --alsologtostderr: exit status 7 (92.474959ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-530000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:43:22.210117   16852 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:43:22.210405   16852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:22.210410   16852 out.go:309] Setting ErrFile to fd 2...
	I1003 18:43:22.210414   16852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:22.210607   16852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:43:22.210789   16852 out.go:303] Setting JSON to true
	I1003 18:43:22.210811   16852 mustload.go:65] Loading cluster: multinode-530000
	I1003 18:43:22.210845   16852 notify.go:220] Checking for updates...
	I1003 18:43:22.211088   16852 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:43:22.211102   16852 status.go:255] checking status of multinode-530000 ...
	I1003 18:43:22.211521   16852 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:22.261477   16852 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:22.261531   16852 status.go:330] multinode-530000 host status = "" (err=state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	)
	I1003 18:43:22.261554   16852 status.go:257] multinode-530000 status: &{Name:multinode-530000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1003 18:43:22.261570   16852 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:43:22.261578   16852 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-530000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.907857ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:22.411080   16858 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.24s)
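[Editor's note] The decode failure above comes from unmarshalling a single JSON object into `[]cmd.Status`: with only one (nonexistent) host reporting, the status command printed an object rather than an array. A minimal, hypothetical sketch (not the test's actual code) of decoding that tolerates both shapes, using the object printed above as input:

package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the fields printed above; it is not minikube's cmd.Status type.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// decodeStatuses accepts either a JSON array or a single JSON object,
// which is the shape this run actually printed.
func decodeStatuses(data []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-530000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)
	st, err := decodeStatuses(raw)
	fmt.Println(st, err)
}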

                                                
                                    
TestMultiNode/serial/StopNode (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 node stop m03: exit status 85 (133.348987ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-530000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status: exit status 7 (94.348383ms)

                                                
                                                
-- stdout --
	multinode-530000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:22.640103   16864 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:43:22.640113   16864 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr: exit status 7 (92.848361ms)

                                                
                                                
-- stdout --
	multinode-530000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:43:22.681952   16868 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:43:22.682230   16868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:22.682235   16868 out.go:309] Setting ErrFile to fd 2...
	I1003 18:43:22.682239   16868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:22.682421   16868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:43:22.682609   16868 out.go:303] Setting JSON to false
	I1003 18:43:22.682633   16868 mustload.go:65] Loading cluster: multinode-530000
	I1003 18:43:22.682684   16868 notify.go:220] Checking for updates...
	I1003 18:43:22.682909   16868 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:43:22.682924   16868 status.go:255] checking status of multinode-530000 ...
	I1003 18:43:22.683341   16868 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:22.733230   16868 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:22.733277   16868 status.go:330] multinode-530000 host status = "" (err=state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	)
	I1003 18:43:22.733296   16868 status.go:257] multinode-530000 status: &{Name:multinode-530000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1003 18:43:22.733312   16868 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:43:22.733318   16868 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr": multinode-530000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:233: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr": multinode-530000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:237: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr": multinode-530000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.890858ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:22.882638   16874 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.47s)
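[Editor's note] Every "Nonexistent" state in this run reduces to the same probe visible in the stderr above: `docker container inspect <name> --format={{.State.Status}}`, with a "No such container" failure mapped to a nonexistent host. A rough sketch of that probe (illustrative names, not minikube's actual status code path):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState inspects the container and maps a "No such container" failure
// to the Nonexistent state reported above.
func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect", container, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(hostState("multinode-530000"))
}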

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 node start m03 --alsologtostderr: exit status 85 (132.855536ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:43:22.981478   16880 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:43:22.981768   16880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:22.981773   16880 out.go:309] Setting ErrFile to fd 2...
	I1003 18:43:22.981777   16880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:22.981969   16880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:43:22.982307   16880 mustload.go:65] Loading cluster: multinode-530000
	I1003 18:43:22.982574   16880 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:43:23.003336   16880 out.go:177] 
	W1003 18:43:23.024653   16880 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1003 18:43:23.024678   16880 out.go:239] * 
	* 
	W1003 18:43:23.029573   16880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:43:23.050462   16880 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I1003 18:43:22.981478   16880 out.go:296] Setting OutFile to fd 1 ...
I1003 18:43:22.981768   16880 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:43:22.981773   16880 out.go:309] Setting ErrFile to fd 2...
I1003 18:43:22.981777   16880 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:43:22.981969   16880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
I1003 18:43:22.982307   16880 mustload.go:65] Loading cluster: multinode-530000
I1003 18:43:22.982574   16880 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:43:23.003336   16880 out.go:177] 
W1003 18:43:23.024653   16880 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1003 18:43:23.024678   16880 out.go:239] * 
* 
W1003 18:43:23.029573   16880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 18:43:23.050462   16880 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-530000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status: exit status 7 (93.942663ms)

                                                
                                                
-- stdout --
	multinode-530000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:23.166783   16882 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:43:23.166795   16882 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-530000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "61017579b0e3b1cdc64cdb1f0abb4827108aa940197529a43bae83b3c0da8ad4",
	        "Created": "2023-10-04T01:35:25.071011677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.90977ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:43:23.314784   16888 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.43s)
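[Editor's note] `node start m03` exits with GUEST_NODE_RETRIEVE because the profile only records a single node, so there is no m03 to operate on. One way to guard a node operation is to consult `node list -p <profile>` first, as the next test does; the sketch below assumes an output format of one node per line with the name in the first column, which does not appear in this report.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasNode checks `node list -p <profile>` for a node before acting on it.
func hasNode(profile, node string) (bool, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "node", "list", "-p", profile).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue
		}
		// Accept either the bare node name or a "<profile>-m03"-style name.
		if fields[0] == node || strings.HasSuffix(fields[0], "-"+node) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasNode("multinode-530000", "m03")
	fmt.Println(ok, err)
}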

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (795.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-530000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-530000
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-530000: exit status 82 (14.044861794s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-530000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-530000" : exit status 82
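[Editor's note] The repeated "Stopping node" lines and the GUEST_STOP_TIMEOUT above show a stop-and-verify loop that keeps polling a container docker no longer knows about until its deadline expires. A hedged sketch of that pattern (durations and names are illustrative; this is not minikube's stop implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForExited polls the container state until it reports "exited" or the
// deadline passes, mirroring the stop-and-verify loop visible in this log.
func waitForExited(container string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", container, "--format", "{{.State.Status}}").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("timed out waiting for the container to reach the exited state")
}

func main() {
	fmt.Println(waitForExited("multinode-530000", 10*time.Second))
}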
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-530000 --wait=true -v=8 --alsologtostderr
E1003 18:46:42.888801   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:46:59.835560   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:47:48.050999   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:51:59.846758   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:52:31.170153   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 18:52:48.061996   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-530000 --wait=true -v=8 --alsologtostderr: exit status 52 (13m1.429841596s)

                                                
                                                
-- stdout --
	* [multinode-530000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-530000 in cluster multinode-530000
	* Pulling base image ...
	* docker "multinode-530000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-530000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:43:37.444112   16913 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:43:37.444397   16913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:37.444402   16913 out.go:309] Setting ErrFile to fd 2...
	I1003 18:43:37.444407   16913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:43:37.444584   16913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:43:37.445908   16913 out.go:303] Setting JSON to false
	I1003 18:43:37.467965   16913 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7985,"bootTime":1696375832,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 18:43:37.468066   16913 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:43:37.489948   16913 out.go:177] * [multinode-530000] minikube v1.31.2 on Darwin 14.0
	I1003 18:43:37.533143   16913 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 18:43:37.533229   16913 notify.go:220] Checking for updates...
	I1003 18:43:37.576800   16913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 18:43:37.597862   16913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:43:37.619813   16913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:43:37.662966   16913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 18:43:37.688673   16913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:43:37.710119   16913 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:43:37.710289   16913 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:43:37.767699   16913 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:43:37.767838   16913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:43:37.869703   16913 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:90 SystemTime:2023-10-04 01:43:37.858154594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:43:37.891071   16913 out.go:177] * Using the docker driver based on existing profile
	I1003 18:43:37.912035   16913 start.go:298] selected driver: docker
	I1003 18:43:37.912084   16913 start.go:902] validating driver "docker" against &{Name:multinode-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-530000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:43:37.912200   16913 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:43:37.912399   16913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:43:38.013697   16913 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:90 SystemTime:2023-10-04 01:43:38.002662817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:43:38.016703   16913 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:43:38.016742   16913 cni.go:84] Creating CNI manager for ""
	I1003 18:43:38.016750   16913 cni.go:136] 1 nodes found, recommending kindnet
	I1003 18:43:38.016759   16913 start_flags.go:321] config:
	{Name:multinode-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:43:38.059308   16913 out.go:177] * Starting control plane node multinode-530000 in cluster multinode-530000
	I1003 18:43:38.080233   16913 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:43:38.101169   16913 out.go:177] * Pulling base image ...
	I1003 18:43:38.143453   16913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:43:38.143509   16913 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 18:43:38.143533   16913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 18:43:38.143558   16913 cache.go:57] Caching tarball of preloaded images
	I1003 18:43:38.143745   16913 preload.go:174] Found /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 18:43:38.143766   16913 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 18:43:38.144231   16913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/multinode-530000/config.json ...
	I1003 18:43:38.195275   16913 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1003 18:43:38.195289   16913 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1003 18:43:38.195312   16913 cache.go:195] Successfully downloaded all kic artifacts
	I1003 18:43:38.195359   16913 start.go:365] acquiring machines lock for multinode-530000: {Name:mk882a18747e7f7b67f76932b37aedc8a7b77799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:43:38.195444   16913 start.go:369] acquired machines lock for "multinode-530000" in 64.42µs
	I1003 18:43:38.195467   16913 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:43:38.195480   16913 fix.go:54] fixHost starting: 
	I1003 18:43:38.195695   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:38.246310   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:38.246361   16913 fix.go:102] recreateIfNeeded on multinode-530000: state= err=unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:38.246387   16913 fix.go:107] machineExists: false. err=machine does not exist
	I1003 18:43:38.288527   16913 out.go:177] * docker "multinode-530000" container is missing, will recreate.
	I1003 18:43:38.309787   16913 delete.go:124] DEMOLISHING multinode-530000 ...
	I1003 18:43:38.309988   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:38.361688   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:43:38.361740   16913 stop.go:75] unable to get state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:38.361762   16913 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:38.362120   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:38.413042   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:38.413091   16913 delete.go:82] Unable to get host status for multinode-530000, assuming it has already been deleted: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:38.413177   16913 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:43:38.463203   16913 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:43:38.463232   16913 kic.go:367] could not find the container multinode-530000 to remove it. will try anyways
	I1003 18:43:38.463307   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:38.513449   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:43:38.513491   16913 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:38.513583   16913 cli_runner.go:164] Run: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0"
	W1003 18:43:38.564166   16913 cli_runner.go:211] docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1003 18:43:38.564191   16913 oci.go:647] error shutdown multinode-530000: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:39.566646   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:39.621880   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:39.621922   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:39.621933   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:39.621967   16913 retry.go:31] will retry after 586.008804ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:40.210394   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:40.264841   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:40.264885   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:40.264897   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:40.264919   16913 retry.go:31] will retry after 707.767959ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:40.973206   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:41.027287   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:41.027335   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:41.027346   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:41.027379   16913 retry.go:31] will retry after 1.494640568s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:42.523268   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:42.577772   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:42.577812   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:42.577824   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:42.577847   16913 retry.go:31] will retry after 2.149777176s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:44.727958   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:44.781737   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:44.781780   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:44.781795   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:44.781815   16913 retry.go:31] will retry after 2.386074928s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:47.168343   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:47.222701   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:47.222752   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:47.222764   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:47.222784   16913 retry.go:31] will retry after 3.719989424s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:50.945265   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:50.997809   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:50.997850   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:50.997862   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:50.997883   16913 retry.go:31] will retry after 6.371217282s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:57.370848   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:43:57.427215   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:43:57.427257   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:43:57.427266   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:43:57.427294   16913 oci.go:88] couldn't shut down multinode-530000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	 
	I1003 18:43:57.427370   16913 cli_runner.go:164] Run: docker rm -f -v multinode-530000
	I1003 18:43:57.478135   16913 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:43:57.527567   16913 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:43:57.527680   16913 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:43:57.578417   16913 cli_runner.go:164] Run: docker network rm multinode-530000
	I1003 18:43:57.671862   16913 fix.go:114] Sleeping 1 second for extra luck!
	I1003 18:43:58.672940   16913 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:43:58.694398   16913 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 18:43:58.694506   16913 start.go:159] libmachine.API.Create for "multinode-530000" (driver="docker")
	I1003 18:43:58.694526   16913 client.go:168] LocalClient.Create starting
	I1003 18:43:58.694647   16913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 18:43:58.694696   16913 main.go:141] libmachine: Decoding PEM data...
	I1003 18:43:58.694713   16913 main.go:141] libmachine: Parsing certificate...
	I1003 18:43:58.694779   16913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 18:43:58.694810   16913 main.go:141] libmachine: Decoding PEM data...
	I1003 18:43:58.694826   16913 main.go:141] libmachine: Parsing certificate...
	I1003 18:43:58.695203   16913 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:43:58.746582   16913 cli_runner.go:211] docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:43:58.746677   16913 network_create.go:281] running [docker network inspect multinode-530000] to gather additional debugging logs...
	I1003 18:43:58.746693   16913 cli_runner.go:164] Run: docker network inspect multinode-530000
	W1003 18:43:58.796159   16913 cli_runner.go:211] docker network inspect multinode-530000 returned with exit code 1
	I1003 18:43:58.796193   16913 network_create.go:284] error running [docker network inspect multinode-530000]: docker network inspect multinode-530000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-530000 not found
	I1003 18:43:58.796203   16913 network_create.go:286] output of [docker network inspect multinode-530000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-530000 not found
	
	** /stderr **
	I1003 18:43:58.796354   16913 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:43:58.847894   16913 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:43:58.848314   16913 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001117180}
	I1003 18:43:58.848331   16913 network_create.go:124] attempt to create docker network multinode-530000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1003 18:43:58.848395   16913 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	I1003 18:43:58.935298   16913 network_create.go:108] docker network multinode-530000 192.168.58.0/24 created
	I1003 18:43:58.935335   16913 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-530000" container
	I1003 18:43:58.935439   16913 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:43:58.986569   16913 cli_runner.go:164] Run: docker volume create multinode-530000 --label name.minikube.sigs.k8s.io=multinode-530000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:43:59.036707   16913 oci.go:103] Successfully created a docker volume multinode-530000
	I1003 18:43:59.036818   16913 cli_runner.go:164] Run: docker run --rm --name multinode-530000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-530000 --entrypoint /usr/bin/test -v multinode-530000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 18:43:59.365367   16913 oci.go:107] Successfully prepared a docker volume multinode-530000
	I1003 18:43:59.365399   16913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:43:59.365413   16913 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:43:59.365530   16913 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-530000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:49:58.708231   16913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:49:58.708363   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:49:58.761318   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:49:58.761432   16913 retry.go:31] will retry after 259.86864ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:49:59.021845   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:49:59.074322   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:49:59.074422   16913 retry.go:31] will retry after 555.032284ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:49:59.631862   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:49:59.686270   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:49:59.686403   16913 retry.go:31] will retry after 288.749682ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:49:59.977541   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:00.033276   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:00.033380   16913 retry.go:31] will retry after 555.732488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:00.590509   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:00.644756   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:50:00.644855   16913 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:50:00.644873   16913 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:00.644928   16913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:50:00.644992   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:00.695035   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:00.695126   16913 retry.go:31] will retry after 154.034363ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:00.850091   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:00.904546   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:00.904640   16913 retry.go:31] will retry after 467.146253ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:01.372823   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:01.428231   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:01.428334   16913 retry.go:31] will retry after 513.94332ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:01.944089   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:01.998839   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:50:01.998937   16913 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:50:01.998953   16913 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:01.998965   16913 start.go:128] duration metric: createHost completed in 6m3.312836066s
	I1003 18:50:01.999031   16913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:50:01.999084   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:02.049270   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:02.049355   16913 retry.go:31] will retry after 151.161356ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:02.202915   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:02.254922   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:02.255007   16913 retry.go:31] will retry after 495.972273ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:02.752319   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:02.805165   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:02.805250   16913 retry.go:31] will retry after 785.843559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:03.592681   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:03.646151   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:50:03.646243   16913 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:50:03.646260   16913 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:03.646326   16913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:50:03.646380   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:03.697414   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:03.697512   16913 retry.go:31] will retry after 321.352426ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:04.021257   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:04.072972   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:04.073072   16913 retry.go:31] will retry after 530.859044ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:04.606410   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:04.658086   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:50:04.658177   16913 retry.go:31] will retry after 762.376092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:05.421067   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:50:05.472719   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:50:05.472814   16913 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:50:05.472836   16913 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:05.472843   16913 fix.go:56] fixHost completed within 6m27.263438752s
	I1003 18:50:05.472850   16913 start.go:83] releasing machines lock for "multinode-530000", held for 6m27.263467662s
	W1003 18:50:05.472863   16913 start.go:688] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1003 18:50:05.472940   16913 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1003 18:50:05.472948   16913 start.go:703] Will try again in 5 seconds ...
	I1003 18:50:10.475578   16913 start.go:365] acquiring machines lock for multinode-530000: {Name:mk882a18747e7f7b67f76932b37aedc8a7b77799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:50:10.475770   16913 start.go:369] acquired machines lock for "multinode-530000" in 150.516µs
	I1003 18:50:10.475802   16913 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:50:10.475809   16913 fix.go:54] fixHost starting: 
	I1003 18:50:10.476244   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:10.530678   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:10.530718   16913 fix.go:102] recreateIfNeeded on multinode-530000: state= err=unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:10.530736   16913 fix.go:107] machineExists: false. err=machine does not exist
	I1003 18:50:10.552438   16913 out.go:177] * docker "multinode-530000" container is missing, will recreate.
	I1003 18:50:10.594764   16913 delete.go:124] DEMOLISHING multinode-530000 ...
	I1003 18:50:10.594882   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:10.645439   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:50:10.645477   16913 stop.go:75] unable to get state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:10.645502   16913 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:10.645882   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:10.696317   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:10.696367   16913 delete.go:82] Unable to get host status for multinode-530000, assuming it has already been deleted: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:10.696451   16913 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:50:10.747043   16913 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:50:10.747071   16913 kic.go:367] could not find the container multinode-530000 to remove it. will try anyways
	I1003 18:50:10.747152   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:10.796557   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:50:10.796597   16913 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:10.796672   16913 cli_runner.go:164] Run: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0"
	W1003 18:50:10.846981   16913 cli_runner.go:211] docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1003 18:50:10.847006   16913 oci.go:647] error shutdown multinode-530000: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:11.847454   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:11.901423   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:11.901462   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:11.901473   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:11.901495   16913 retry.go:31] will retry after 579.21585ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:12.483097   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:12.537554   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:12.537602   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:12.537616   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:12.537635   16913 retry.go:31] will retry after 629.502713ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:13.168131   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:13.219778   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:13.219819   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:13.219830   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:13.219851   16913 retry.go:31] will retry after 1.185226055s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:14.406947   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:14.459550   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:14.459611   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:14.459624   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:14.459647   16913 retry.go:31] will retry after 2.062527378s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:16.522553   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:16.574383   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:16.574429   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:16.574440   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:16.574461   16913 retry.go:31] will retry after 2.290445317s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:18.867304   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:18.918754   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:18.918796   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:18.918809   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:18.918832   16913 retry.go:31] will retry after 4.230804941s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:23.152169   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:23.205703   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:23.205744   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:23.205761   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:23.205782   16913 retry.go:31] will retry after 3.273798548s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:26.481908   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:26.536476   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:26.536520   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:26.536530   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:26.536549   16913 retry.go:31] will retry after 4.388308954s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:30.927283   16913 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:50:30.980746   16913 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:50:30.980788   16913 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:50:30.980800   16913 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:50:30.980826   16913 oci.go:88] couldn't shut down multinode-530000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	 
	I1003 18:50:30.980906   16913 cli_runner.go:164] Run: docker rm -f -v multinode-530000
	I1003 18:50:31.033162   16913 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:50:31.082812   16913 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:50:31.082929   16913 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:50:31.132962   16913 cli_runner.go:164] Run: docker network rm multinode-530000
	I1003 18:50:31.243590   16913 fix.go:114] Sleeping 1 second for extra luck!
	I1003 18:50:32.245783   16913 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:50:32.267884   16913 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 18:50:32.268054   16913 start.go:159] libmachine.API.Create for "multinode-530000" (driver="docker")
	I1003 18:50:32.268091   16913 client.go:168] LocalClient.Create starting
	I1003 18:50:32.268297   16913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 18:50:32.268387   16913 main.go:141] libmachine: Decoding PEM data...
	I1003 18:50:32.268416   16913 main.go:141] libmachine: Parsing certificate...
	I1003 18:50:32.268492   16913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 18:50:32.268553   16913 main.go:141] libmachine: Decoding PEM data...
	I1003 18:50:32.268580   16913 main.go:141] libmachine: Parsing certificate...
	I1003 18:50:32.269231   16913 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:50:32.323479   16913 cli_runner.go:211] docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:50:32.323557   16913 network_create.go:281] running [docker network inspect multinode-530000] to gather additional debugging logs...
	I1003 18:50:32.323572   16913 cli_runner.go:164] Run: docker network inspect multinode-530000
	W1003 18:50:32.373445   16913 cli_runner.go:211] docker network inspect multinode-530000 returned with exit code 1
	I1003 18:50:32.373471   16913 network_create.go:284] error running [docker network inspect multinode-530000]: docker network inspect multinode-530000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-530000 not found
	I1003 18:50:32.373483   16913 network_create.go:286] output of [docker network inspect multinode-530000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-530000 not found
	
	** /stderr **
	I1003 18:50:32.373623   16913 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:50:32.425823   16913 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:50:32.427256   16913 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:50:32.427638   16913 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a8e580}
	I1003 18:50:32.427657   16913 network_create.go:124] attempt to create docker network multinode-530000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1003 18:50:32.427732   16913 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	W1003 18:50:32.490968   16913 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000 returned with exit code 1
	W1003 18:50:32.491000   16913 network_create.go:149] failed to create docker network multinode-530000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1003 18:50:32.491018   16913 network_create.go:116] failed to create docker network multinode-530000 192.168.67.0/24, will retry: subnet is taken
	I1003 18:50:32.492477   16913 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:50:32.492853   16913 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004b8b50}
	I1003 18:50:32.492865   16913 network_create.go:124] attempt to create docker network multinode-530000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1003 18:50:32.492927   16913 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	I1003 18:50:32.579702   16913 network_create.go:108] docker network multinode-530000 192.168.76.0/24 created
	I1003 18:50:32.579735   16913 kic.go:117] calculated static IP "192.168.76.2" for the "multinode-530000" container
	I1003 18:50:32.579843   16913 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:50:32.631307   16913 cli_runner.go:164] Run: docker volume create multinode-530000 --label name.minikube.sigs.k8s.io=multinode-530000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:50:32.681843   16913 oci.go:103] Successfully created a docker volume multinode-530000
	I1003 18:50:32.681968   16913 cli_runner.go:164] Run: docker run --rm --name multinode-530000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-530000 --entrypoint /usr/bin/test -v multinode-530000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 18:50:32.996665   16913 oci.go:107] Successfully prepared a docker volume multinode-530000
	I1003 18:50:32.996696   16913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:50:32.996708   16913 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:50:32.996806   16913 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-530000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:56:32.281736   16913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:56:32.281855   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:32.338296   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:32.338404   16913 retry.go:31] will retry after 367.511033ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:32.708253   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:32.764956   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:32.765064   16913 retry.go:31] will retry after 522.030138ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:33.289521   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:33.344188   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:33.344294   16913 retry.go:31] will retry after 753.620055ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:34.099285   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:34.155718   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:56:34.155829   16913 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:56:34.155848   16913 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:34.155902   16913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:56:34.155968   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:34.205685   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:34.205778   16913 retry.go:31] will retry after 310.177863ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:34.518367   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:34.570570   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:34.570665   16913 retry.go:31] will retry after 495.307051ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:35.068396   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:35.121337   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:35.121427   16913 retry.go:31] will retry after 791.491385ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:35.913750   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:35.966765   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:56:35.966864   16913 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:56:35.966884   16913 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:35.966891   16913 start.go:128] duration metric: createHost completed in 6m3.707850446s
	I1003 18:56:35.966961   16913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:56:35.967021   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:36.016784   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:36.016874   16913 retry.go:31] will retry after 188.628387ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:36.207901   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:36.260942   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:36.261041   16913 retry.go:31] will retry after 241.326447ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:36.502920   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:36.558188   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:36.558273   16913 retry.go:31] will retry after 659.571157ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:37.220339   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:37.271579   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:56:37.271674   16913 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:56:37.271700   16913 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:37.271755   16913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:56:37.271811   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:37.321328   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:37.321429   16913 retry.go:31] will retry after 320.481719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:37.643435   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:37.697763   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:37.697848   16913 retry.go:31] will retry after 467.961899ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:38.168250   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:38.223020   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	I1003 18:56:38.223126   16913 retry.go:31] will retry after 437.145211ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:38.660766   16913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000
	W1003 18:56:38.713519   16913 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000 returned with exit code 1
	W1003 18:56:38.713616   16913 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	W1003 18:56:38.713642   16913 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-530000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-530000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:38.713653   16913 fix.go:56] fixHost completed within 6m28.223760497s
	I1003 18:56:38.713659   16913 start.go:83] releasing machines lock for "multinode-530000", held for 6m28.223793878s
	W1003 18:56:38.713743   16913 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-530000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-530000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1003 18:56:38.757147   16913 out.go:177] 
	W1003 18:56:38.779045   16913 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1003 18:56:38.779108   16913 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1003 18:56:38.779153   16913 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1003 18:56:38.800059   16913 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-530000" : exit status 52
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-530000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "db382bf6531e9910b6117c935496d39a709ddf2881dabfc4c1567656392f60bd",
	        "Created": "2023-10-04T01:50:32.538719687Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (94.192994ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:56:39.070780   17278 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (795.73s)

                                                
                                    
TestMultiNode/serial/DeleteNode (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 node delete m03: exit status 80 (188.077138ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-530000 node delete m03": exit status 80
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr: exit status 7 (95.267834ms)

                                                
                                                
-- stdout --
	multinode-530000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:56:39.301197   17286 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:56:39.301393   17286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:39.301399   17286 out.go:309] Setting ErrFile to fd 2...
	I1003 18:56:39.301404   17286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:39.301600   17286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:56:39.301781   17286 out.go:303] Setting JSON to false
	I1003 18:56:39.301803   17286 mustload.go:65] Loading cluster: multinode-530000
	I1003 18:56:39.301849   17286 notify.go:220] Checking for updates...
	I1003 18:56:39.303057   17286 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:56:39.303075   17286 status.go:255] checking status of multinode-530000 ...
	I1003 18:56:39.303472   17286 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:39.354403   17286 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:39.354451   17286 status.go:330] multinode-530000 host status = "" (err=state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	)
	I1003 18:56:39.354470   17286 status.go:257] multinode-530000 status: &{Name:multinode-530000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1003 18:56:39.354485   17286 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:56:39.354493   17286 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "db382bf6531e9910b6117c935496d39a709ddf2881dabfc4c1567656392f60bd",
	        "Created": "2023-10-04T01:50:32.538719687Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (94.217434ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:56:39.503751   17292 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.43s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (11.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 stop: exit status 82 (11.323669114s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	* Stopping node "multinode-530000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-530000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-530000 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status: exit status 7 (94.398122ms)

                                                
                                                
-- stdout --
	multinode-530000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:56:50.922750   17315 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:56:50.922762   17315 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr: exit status 7 (94.909098ms)

                                                
                                                
-- stdout --
	multinode-530000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:56:50.964955   17319 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:56:50.965550   17319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:50.965561   17319 out.go:309] Setting ErrFile to fd 2...
	I1003 18:56:50.965568   17319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:50.966009   17319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:56:50.966487   17319 out.go:303] Setting JSON to false
	I1003 18:56:50.966511   17319 mustload.go:65] Loading cluster: multinode-530000
	I1003 18:56:50.966566   17319 notify.go:220] Checking for updates...
	I1003 18:56:50.966791   17319 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:56:50.966806   17319 status.go:255] checking status of multinode-530000 ...
	I1003 18:56:50.967241   17319 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:51.017725   17319 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:51.017769   17319 status.go:330] multinode-530000 host status = "" (err=state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	)
	I1003 18:56:51.017791   17319 status.go:257] multinode-530000 status: &{Name:multinode-530000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1003 18:56:51.017806   17319 status.go:260] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	E1003 18:56:51.017814   17319 status.go:263] The "multinode-530000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr": multinode-530000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-530000 status --alsologtostderr": multinode-530000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "db382bf6531e9910b6117c935496d39a709ddf2881dabfc4c1567656392f60bd",
	        "Created": "2023-10-04T01:50:32.538719687Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (93.846ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:56:51.166417   17325 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (11.66s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (122.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-530000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1003 18:56:59.857423   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 18:57:48.073384   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-530000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m2.388707375s)

                                                
                                                
-- stdout --
	* [multinode-530000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-530000 in cluster multinode-530000
	* Pulling base image ...
	* docker "multinode-530000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:56:51.263253   17331 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:56:51.263538   17331 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:51.263543   17331 out.go:309] Setting ErrFile to fd 2...
	I1003 18:56:51.263547   17331 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:51.263737   17331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 18:56:51.265046   17331 out.go:303] Setting JSON to false
	I1003 18:56:51.286872   17331 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8779,"bootTime":1696375832,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 18:56:51.286983   17331 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:56:51.308422   17331 out.go:177] * [multinode-530000] minikube v1.31.2 on Darwin 14.0
	I1003 18:56:51.349923   17331 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 18:56:51.350017   17331 notify.go:220] Checking for updates...
	I1003 18:56:51.393210   17331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 18:56:51.415100   17331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:56:51.436290   17331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:56:51.458173   17331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 18:56:51.481174   17331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:56:51.502840   17331 config.go:182] Loaded profile config "multinode-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:56:51.503606   17331 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:56:51.560806   17331 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:56:51.560935   17331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:56:51.660134   17331 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:110 SystemTime:2023-10-04 01:56:51.649577047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:56:51.681852   17331 out.go:177] * Using the docker driver based on existing profile
	I1003 18:56:51.703554   17331 start.go:298] selected driver: docker
	I1003 18:56:51.703586   17331 start.go:902] validating driver "docker" against &{Name:multinode-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-530000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:56:51.703702   17331 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:56:51.703892   17331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:56:51.807917   17331 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:110 SystemTime:2023-10-04 01:56:51.797320167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:56:51.811009   17331 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:56:51.811049   17331 cni.go:84] Creating CNI manager for ""
	I1003 18:56:51.811058   17331 cni.go:136] 1 nodes found, recommending kindnet
	I1003 18:56:51.811070   17331 start_flags.go:321] config:
	{Name:multinode-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:56:51.854400   17331 out.go:177] * Starting control plane node multinode-530000 in cluster multinode-530000
	I1003 18:56:51.877363   17331 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:56:51.898558   17331 out.go:177] * Pulling base image ...
	I1003 18:56:51.940589   17331 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:56:51.940660   17331 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 18:56:51.940694   17331 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 18:56:51.940713   17331 cache.go:57] Caching tarball of preloaded images
	I1003 18:56:51.940898   17331 preload.go:174] Found /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 18:56:51.940920   17331 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 18:56:51.941729   17331 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/multinode-530000/config.json ...
	I1003 18:56:51.992294   17331 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1003 18:56:51.992309   17331 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1003 18:56:51.992328   17331 cache.go:195] Successfully downloaded all kic artifacts
	I1003 18:56:51.992368   17331 start.go:365] acquiring machines lock for multinode-530000: {Name:mk882a18747e7f7b67f76932b37aedc8a7b77799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:56:51.992454   17331 start.go:369] acquired machines lock for "multinode-530000" in 64.918µs
	I1003 18:56:51.992484   17331 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:56:51.992495   17331 fix.go:54] fixHost starting: 
	I1003 18:56:51.992749   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:52.042868   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:52.042930   17331 fix.go:102] recreateIfNeeded on multinode-530000: state= err=unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:52.042949   17331 fix.go:107] machineExists: false. err=machine does not exist
	I1003 18:56:52.064679   17331 out.go:177] * docker "multinode-530000" container is missing, will recreate.
	I1003 18:56:52.086334   17331 delete.go:124] DEMOLISHING multinode-530000 ...
	I1003 18:56:52.086570   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:52.137939   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:56:52.137994   17331 stop.go:75] unable to get state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:52.138014   17331 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:52.138367   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:52.188458   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:52.188502   17331 delete.go:82] Unable to get host status for multinode-530000, assuming it has already been deleted: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:52.188596   17331 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:56:52.238844   17331 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:56:52.238874   17331 kic.go:367] could not find the container multinode-530000 to remove it. will try anyways
	I1003 18:56:52.238943   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:52.288663   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	W1003 18:56:52.288705   17331 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:52.288787   17331 cli_runner.go:164] Run: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0"
	W1003 18:56:52.338722   17331 cli_runner.go:211] docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1003 18:56:52.338749   17331 oci.go:647] error shutdown multinode-530000: docker exec --privileged -t multinode-530000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:53.339379   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:53.393340   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:53.393387   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:53.393401   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:56:53.393453   17331 retry.go:31] will retry after 357.323846ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:53.753186   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:53.807886   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:53.807929   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:53.807942   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:56:53.807961   17331 retry.go:31] will retry after 945.602888ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:54.755082   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:54.811132   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:54.811181   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:54.811198   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:56:54.811219   17331 retry.go:31] will retry after 907.01949ms: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:55.720594   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:55.773652   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:55.773693   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:55.773708   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:56:55.773728   17331 retry.go:31] will retry after 2.027045362s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:57.801105   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:57.852699   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:57.852751   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:57.852764   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:56:57.852785   17331 retry.go:31] will retry after 1.943037135s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:59.796319   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:56:59.851587   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:56:59.851639   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:56:59.851649   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:56:59.851676   17331 retry.go:31] will retry after 3.759300537s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:57:03.611269   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:57:03.662487   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:57:03.662529   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:57:03.662547   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:57:03.662568   17331 retry.go:31] will retry after 6.08756148s: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:57:09.750780   17331 cli_runner.go:164] Run: docker container inspect multinode-530000 --format={{.State.Status}}
	W1003 18:57:09.806497   17331 cli_runner.go:211] docker container inspect multinode-530000 --format={{.State.Status}} returned with exit code 1
	I1003 18:57:09.806539   17331 oci.go:659] temporary error verifying shutdown: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	I1003 18:57:09.806551   17331 oci.go:661] temporary error: container multinode-530000 status is  but expect it to be exited
	I1003 18:57:09.806577   17331 oci.go:88] couldn't shut down multinode-530000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000
	 
	I1003 18:57:09.806662   17331 cli_runner.go:164] Run: docker rm -f -v multinode-530000
	I1003 18:57:09.858807   17331 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-530000
	W1003 18:57:09.908692   17331 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-530000 returned with exit code 1
	I1003 18:57:09.908804   17331 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:57:09.959572   17331 cli_runner.go:164] Run: docker network rm multinode-530000
	I1003 18:57:10.070932   17331 fix.go:114] Sleeping 1 second for extra luck!
	I1003 18:57:11.072383   17331 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:57:11.095332   17331 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 18:57:11.095575   17331 start.go:159] libmachine.API.Create for "multinode-530000" (driver="docker")
	I1003 18:57:11.095612   17331 client.go:168] LocalClient.Create starting
	I1003 18:57:11.095840   17331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 18:57:11.095929   17331 main.go:141] libmachine: Decoding PEM data...
	I1003 18:57:11.095962   17331 main.go:141] libmachine: Parsing certificate...
	I1003 18:57:11.096080   17331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 18:57:11.096145   17331 main.go:141] libmachine: Decoding PEM data...
	I1003 18:57:11.096171   17331 main.go:141] libmachine: Parsing certificate...
	I1003 18:57:11.117106   17331 cli_runner.go:164] Run: docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:57:11.168598   17331 cli_runner.go:211] docker network inspect multinode-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:57:11.168685   17331 network_create.go:281] running [docker network inspect multinode-530000] to gather additional debugging logs...
	I1003 18:57:11.168703   17331 cli_runner.go:164] Run: docker network inspect multinode-530000
	W1003 18:57:11.219480   17331 cli_runner.go:211] docker network inspect multinode-530000 returned with exit code 1
	I1003 18:57:11.219506   17331 network_create.go:284] error running [docker network inspect multinode-530000]: docker network inspect multinode-530000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-530000 not found
	I1003 18:57:11.219515   17331 network_create.go:286] output of [docker network inspect multinode-530000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-530000 not found
	
	** /stderr **
	I1003 18:57:11.219650   17331 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:57:11.294498   17331 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 18:57:11.294868   17331 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f674b0}
	I1003 18:57:11.294886   17331 network_create.go:124] attempt to create docker network multinode-530000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1003 18:57:11.294954   17331 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-530000 multinode-530000
	I1003 18:57:11.381694   17331 network_create.go:108] docker network multinode-530000 192.168.58.0/24 created
	I1003 18:57:11.381731   17331 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-530000" container
	I1003 18:57:11.381857   17331 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:57:11.433076   17331 cli_runner.go:164] Run: docker volume create multinode-530000 --label name.minikube.sigs.k8s.io=multinode-530000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:57:11.483105   17331 oci.go:103] Successfully created a docker volume multinode-530000
	I1003 18:57:11.483235   17331 cli_runner.go:164] Run: docker run --rm --name multinode-530000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-530000 --entrypoint /usr/bin/test -v multinode-530000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 18:57:11.797588   17331 oci.go:107] Successfully prepared a docker volume multinode-530000
	I1003 18:57:11.797635   17331 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:57:11.797649   17331 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:57:11.797751   17331 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-530000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-530000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-530000
helpers_test.go:235: (dbg) docker inspect multinode-530000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-530000",
	        "Id": "ebe903b12570fe01edad99e39f079cce03bd60d206a93f5140e91582413e9b9b",
	        "Created": "2023-10-04T01:57:11.341534998Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-530000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-530000 -n multinode-530000: exit status 7 (94.957932ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 18:58:53.772929   17440 status.go:249] status error: host: state: unknown state "multinode-530000": docker container inspect multinode-530000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-530000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-530000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (122.60s)
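The post-mortem sequence above reduces to one probe: run docker container inspect with a Go template and treat a non-zero exit ("No such container") as "the host does not exist", which is what makes the status helper print "Nonexistent". A minimal stand-alone sketch of that probe, assuming only the Go standard library; the profile name multinode-530000 is taken from the log above, and the function name containerState is illustrative rather than the helper the tests actually use.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the probe seen repeatedly in the log:
//   docker container inspect <name> --format={{.State.Status}}
// A non-zero exit is surfaced as an error together with the daemon's message.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %v: %s", name, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-530000")
	if err != nil {
		fmt.Println("Nonexistent:", err)
		return
	}
	fmt.Println("state:", state)
}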

                                                
                                    
x
+
TestScheduledStopUnix (300.93s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-205000 --memory=2048 --driver=docker 
E1003 19:01:59.868781   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:02:48.081918   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 19:03:22.924748   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-205000 --memory=2048 --driver=docker : signal: killed (5m0.00391406s)

                                                
                                                
-- stdout --
	* [scheduled-stop-205000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-205000 in cluster scheduled-stop-205000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-205000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-205000 in cluster scheduled-stop-205000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-10-03 19:06:12.880225 -0700 PDT m=+4637.824911588
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-205000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-205000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-205000",
	        "Id": "c2b413e5b7d8f0d94bbff711da5e5552a3ce6fb27f3b152bd21ba2aa6d5061ec",
	        "Created": "2023-10-04T02:01:13.873251893Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-205000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-205000 -n scheduled-stop-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-205000 -n scheduled-stop-205000: exit status 7 (94.979725ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:06:13.031338   18107 status.go:249] status error: host: state: unknown state "scheduled-stop-205000": docker container inspect scheduled-stop-205000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-205000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-205000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-205000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-205000
--- FAIL: TestScheduledStopUnix (300.93s)
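The "signal: killed (5m0.00391406s)" result above is what Go's exec package reports when the child minikube process is killed once the per-test time limit (here 5 minutes) expires; the same pattern recurs in the timeouts that follow. A minimal sketch of that mechanism, assuming only the standard library; the 2-second deadline and the sleep command are illustrative stand-ins for the 5-minute limit and the minikube start invocation, not what the test actually runs.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative stand-ins: a short deadline and a long-running command
	// instead of the harness's 5m limit and `minikube start`.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60")
	err := cmd.Run()
	// When the deadline expires the process is killed, and the returned
	// *exec.ExitError stringifies as "signal: killed".
	fmt.Println(err)
}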

                                                
                                    
x
+
TestSkaffold (300.86s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3591652761 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-375000 --memory=2600 --driver=docker 
E1003 19:06:59.878425   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:07:48.094208   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 19:09:11.207618   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-375000 --memory=2600 --driver=docker : signal: killed (4m57.862207809s)

                                                
                                                
-- stdout --
	* [skaffold-375000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-375000 in cluster skaffold-375000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-375000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-375000 in cluster skaffold-375000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-10-03 19:11:13.817187 -0700 PDT m=+4938.751280863
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-375000
helpers_test.go:235: (dbg) docker inspect skaffold-375000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-375000",
	        "Id": "408d45bd894804018ad59a8bc2a1b97389434cdfd99537dfd9370a02441cd3c8",
	        "Created": "2023-10-04T02:06:16.988436118Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-375000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-375000 -n skaffold-375000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-375000 -n skaffold-375000: exit status 7 (94.162523ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:11:13.969164   18256 status.go:249] status error: host: state: unknown state "skaffold-375000": docker container inspect skaffold-375000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-375000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-375000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-375000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-375000
--- FAIL: TestSkaffold (300.86s)

                                                
                                    
x
+
TestInsufficientStorage (300.71s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-569000 --memory=2048 --output=json --wait=true --driver=docker 
E1003 19:11:59.889003   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:12:48.104755   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-569000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.002730701s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"82d6a759-4632-4baa-9ae6-d777162f8bf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-569000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad05e724-5569-417a-8d47-caacc2646a79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17345"}}
	{"specversion":"1.0","id":"4922d528-a72c-438c-b754-df519c5d1f8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig"}}
	{"specversion":"1.0","id":"3592383c-395e-4466-abbb-032878429b63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"40063e5f-c6cd-4c09-aa2e-a22c456256a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"58c71f5d-140a-4a9d-ac99-ed49381a91f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube"}}
	{"specversion":"1.0","id":"078af77d-a389-4036-9a14-3c1e61ce21c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4fc29c5b-277c-44da-9c1f-c2a46ccb0135","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"492ccd5b-8d6f-471d-8cc5-60280113ef89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f495ee07-ad96-4416-bf90-988aef10a3a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e3868fd-514f-4b35-b84d-ebfc549cd1d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"2490acf1-187e-45bc-bf32-c5097ea20373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-569000 in cluster insufficient-storage-569000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c4cd8dd-4431-4c7f-bec9-8d84f6130f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6f70f65-f110-48bf-af8e-200c54e55b8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-569000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-569000 --output=json --layout=cluster: context deadline exceeded (648ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-569000
--- FAIL: TestInsufficientStorage (300.71s)
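The "unmarshalling: unexpected end of JSON input" message above is the error Go's encoding/json returns when the decoder is handed empty input: the status command was cut short by an already-expired context deadline (648ns) and wrote nothing, so status_test.go parsed an empty stdout. A minimal sketch of just that decoding step, assuming only the standard library; the map target is illustrative, the real test decodes into its own status struct.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Empty stdout from the timed-out status command, fed to the decoder.
	var v map[string]interface{}
	err := json.Unmarshal([]byte(""), &v)
	fmt.Println("unmarshalling:", err) // unmarshalling: unexpected end of JSON input
}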

                                                
                                    
x
+
TestKubernetesUpgrade (770.31s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E1003 19:31:59.931771   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:32:48.146807   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 19:36:42.997771   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:36:59.943266   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:37:48.157494   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 52 (12m35.835316086s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-325000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-325000 in cluster kubernetes-upgrade-325000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-325000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:28:51.111712   19037 out.go:296] Setting OutFile to fd 1 ...
	I1003 19:28:51.112003   19037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 19:28:51.112008   19037 out.go:309] Setting ErrFile to fd 2...
	I1003 19:28:51.112012   19037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 19:28:51.112201   19037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 19:28:51.113596   19037 out.go:303] Setting JSON to false
	I1003 19:28:51.136170   19037 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10699,"bootTime":1696375832,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 19:28:51.136252   19037 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 19:28:51.157327   19037 out.go:177] * [kubernetes-upgrade-325000] minikube v1.31.2 on Darwin 14.0
	I1003 19:28:51.199355   19037 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 19:28:51.199435   19037 notify.go:220] Checking for updates...
	I1003 19:28:51.242076   19037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 19:28:51.263006   19037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 19:28:51.284362   19037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:28:51.305324   19037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 19:28:51.327224   19037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:28:51.349154   19037 config.go:182] Loaded profile config "missing-upgrade-672000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1003 19:28:51.349304   19037 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 19:28:51.406748   19037 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 19:28:51.406889   19037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:28:51.507705   19037 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:2 ContainersPaused:0 ContainersStopped:11 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:183 SystemTime:2023-10-04 02:28:51.496297365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 19:28:51.529522   19037 out.go:177] * Using the docker driver based on user configuration
	I1003 19:28:51.551198   19037 start.go:298] selected driver: docker
	I1003 19:28:51.551228   19037 start.go:902] validating driver "docker" against <nil>
	I1003 19:28:51.551242   19037 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:28:51.555571   19037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:28:51.657563   19037 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:2 ContainersPaused:0 ContainersStopped:11 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:183 SystemTime:2023-10-04 02:28:51.647393392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 19:28:51.657779   19037 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 19:28:51.657954   19037 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:28:51.679641   19037 out.go:177] * Using Docker Desktop driver with root privileges
	I1003 19:28:51.701445   19037 cni.go:84] Creating CNI manager for ""
	I1003 19:28:51.701480   19037 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 19:28:51.701498   19037 start_flags.go:321] config:
	{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 19:28:51.744433   19037 out.go:177] * Starting control plane node kubernetes-upgrade-325000 in cluster kubernetes-upgrade-325000
	I1003 19:28:51.765479   19037 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 19:28:51.786307   19037 out.go:177] * Pulling base image ...
	I1003 19:28:51.828418   19037 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 19:28:51.828506   19037 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1003 19:28:51.828511   19037 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 19:28:51.828534   19037 cache.go:57] Caching tarball of preloaded images
	I1003 19:28:51.828765   19037 preload.go:174] Found /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 19:28:51.828788   19037 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 19:28:51.828938   19037 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/kubernetes-upgrade-325000/config.json ...
	I1003 19:28:51.829612   19037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/kubernetes-upgrade-325000/config.json: {Name:mk8aed5c76e32324e2129e4952090b619c1f6e83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:28:51.882986   19037 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1003 19:28:51.883013   19037 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1003 19:28:51.883034   19037 cache.go:195] Successfully downloaded all kic artifacts
	I1003 19:28:51.883081   19037 start.go:365] acquiring machines lock for kubernetes-upgrade-325000: {Name:mka3d20923db051aa6f712bec78a6ddb77f8c2d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:28:51.883237   19037 start.go:369] acquired machines lock for "kubernetes-upgrade-325000" in 144.247µs
	I1003 19:28:51.883263   19037 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 19:28:51.883342   19037 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:28:51.906169   19037 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 19:28:51.906605   19037 start.go:159] libmachine.API.Create for "kubernetes-upgrade-325000" (driver="docker")
	I1003 19:28:51.906656   19037 client.go:168] LocalClient.Create starting
	I1003 19:28:51.906824   19037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 19:28:51.906909   19037 main.go:141] libmachine: Decoding PEM data...
	I1003 19:28:51.906943   19037 main.go:141] libmachine: Parsing certificate...
	I1003 19:28:51.907065   19037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 19:28:51.907127   19037 main.go:141] libmachine: Decoding PEM data...
	I1003 19:28:51.907173   19037 main.go:141] libmachine: Parsing certificate...
	I1003 19:28:51.907998   19037 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:28:51.959382   19037 cli_runner.go:211] docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:28:51.959481   19037 network_create.go:281] running [docker network inspect kubernetes-upgrade-325000] to gather additional debugging logs...
	I1003 19:28:51.959497   19037 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000
	W1003 19:28:52.009696   19037 cli_runner.go:211] docker network inspect kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:28:52.009727   19037 network_create.go:284] error running [docker network inspect kubernetes-upgrade-325000]: docker network inspect kubernetes-upgrade-325000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-325000 not found
	I1003 19:28:52.009757   19037 network_create.go:286] output of [docker network inspect kubernetes-upgrade-325000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-325000 not found
	
	** /stderr **
	I1003 19:28:52.009884   19037 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:28:52.061753   19037 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:28:52.062203   19037 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000bb9d10}
	I1003 19:28:52.062219   19037 network_create.go:124] attempt to create docker network kubernetes-upgrade-325000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1003 19:28:52.062286   19037 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000
	I1003 19:28:52.148746   19037 network_create.go:108] docker network kubernetes-upgrade-325000 192.168.58.0/24 created
	I1003 19:28:52.148785   19037 kic.go:117] calculated static IP "192.168.58.2" for the "kubernetes-upgrade-325000" container
	I1003 19:28:52.148902   19037 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:28:52.201384   19037 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-325000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:28:52.252792   19037 oci.go:103] Successfully created a docker volume kubernetes-upgrade-325000
	I1003 19:28:52.252915   19037 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-325000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --entrypoint /usr/bin/test -v kubernetes-upgrade-325000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 19:28:52.669415   19037 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-325000
	I1003 19:28:52.669449   19037 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 19:28:52.669463   19037 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 19:28:52.669576   19037 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-325000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:34:51.921526   19037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:34:51.921656   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:51.975723   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:51.975861   19037 retry.go:31] will retry after 267.509297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:52.245476   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:52.301263   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:52.301370   19037 retry.go:31] will retry after 326.551834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:52.630345   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:52.684959   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:52.685046   19037 retry.go:31] will retry after 482.636261ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:53.169116   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:53.223904   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:34:53.224019   19037 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	W1003 19:34:53.224037   19037 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:53.224095   19037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:34:53.224163   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:53.273931   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:53.274032   19037 retry.go:31] will retry after 279.804436ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:53.556271   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:53.611189   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:53.611286   19037 retry.go:31] will retry after 314.564385ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:53.926853   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:53.978230   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:53.978320   19037 retry.go:31] will retry after 382.091358ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:54.361372   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:34:54.413557   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:34:54.413659   19037 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	W1003 19:34:54.413684   19037 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:54.413697   19037 start.go:128] duration metric: createHost completed in 6m2.516993067s
	I1003 19:34:54.413705   19037 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 6m2.517117146s
	W1003 19:34:54.413717   19037 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1003 19:34:54.414131   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:34:54.464125   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:34:54.464175   19037 delete.go:82] Unable to get host status for kubernetes-upgrade-325000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	W1003 19:34:54.464265   19037 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1003 19:34:54.464275   19037 start.go:703] Will try again in 5 seconds ...
	I1003 19:34:59.466608   19037 start.go:365] acquiring machines lock for kubernetes-upgrade-325000: {Name:mka3d20923db051aa6f712bec78a6ddb77f8c2d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:34:59.467454   19037 start.go:369] acquired machines lock for "kubernetes-upgrade-325000" in 775.817µs
	I1003 19:34:59.467533   19037 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:34:59.467555   19037 fix.go:54] fixHost starting: 
	I1003 19:34:59.468052   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:34:59.522925   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:34:59.522967   19037 fix.go:102] recreateIfNeeded on kubernetes-upgrade-325000: state= err=unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:59.523008   19037 fix.go:107] machineExists: false. err=machine does not exist
	I1003 19:34:59.544349   19037 out.go:177] * docker "kubernetes-upgrade-325000" container is missing, will recreate.
	I1003 19:34:59.586128   19037 delete.go:124] DEMOLISHING kubernetes-upgrade-325000 ...
	I1003 19:34:59.586342   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:34:59.637650   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	W1003 19:34:59.637699   19037 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:59.637720   19037 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:59.638089   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:34:59.688323   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:34:59.688377   19037 delete.go:82] Unable to get host status for kubernetes-upgrade-325000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:59.688468   19037 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-325000
	W1003 19:34:59.738846   19037 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:34:59.738882   19037 kic.go:367] could not find the container kubernetes-upgrade-325000 to remove it. will try anyways
	I1003 19:34:59.738949   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:34:59.788656   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	W1003 19:34:59.788701   19037 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:34:59.788789   19037 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-325000 /bin/bash -c "sudo init 0"
	W1003 19:34:59.838711   19037 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-325000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1003 19:34:59.838748   19037 oci.go:647] error shutdown kubernetes-upgrade-325000: docker exec --privileged -t kubernetes-upgrade-325000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:00.841205   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:00.894320   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:00.894369   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:00.894388   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:00.894413   19037 retry.go:31] will retry after 647.01843ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:01.543184   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:01.598281   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:01.598336   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:01.598345   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:01.598367   19037 retry.go:31] will retry after 859.786546ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:02.458441   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:02.511396   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:02.511453   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:02.511467   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:02.511490   19037 retry.go:31] will retry after 569.310366ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:03.082701   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:03.136517   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:03.136565   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:03.136586   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:03.136610   19037 retry.go:31] will retry after 965.583933ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:04.103072   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:04.155783   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:04.155834   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:04.155852   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:04.155874   19037 retry.go:31] will retry after 3.061457842s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:07.219133   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:07.272670   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:07.272715   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:07.272732   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:07.272752   19037 retry.go:31] will retry after 3.716341457s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:10.991680   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:11.045118   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:11.045164   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:11.045174   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:11.045196   19037 retry.go:31] will retry after 8.388970093s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:19.437202   19037 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	W1003 19:35:19.491496   19037 cli_runner.go:211] docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}} returned with exit code 1
	I1003 19:35:19.491541   19037 oci.go:659] temporary error verifying shutdown: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:35:19.491552   19037 oci.go:661] temporary error: container kubernetes-upgrade-325000 status is  but expect it to be exited
	I1003 19:35:19.491579   19037 oci.go:88] couldn't shut down kubernetes-upgrade-325000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	 
	I1003 19:35:19.491656   19037 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-325000
	I1003 19:35:19.543269   19037 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-325000
	W1003 19:35:19.592713   19037 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:35:19.592827   19037 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:35:19.643408   19037 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-325000
	I1003 19:35:19.752767   19037 fix.go:114] Sleeping 1 second for extra luck!
	I1003 19:35:20.754967   19037 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:35:20.776993   19037 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1003 19:35:20.777168   19037 start.go:159] libmachine.API.Create for "kubernetes-upgrade-325000" (driver="docker")
	I1003 19:35:20.777230   19037 client.go:168] LocalClient.Create starting
	I1003 19:35:20.777422   19037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/ca.pem
	I1003 19:35:20.777502   19037 main.go:141] libmachine: Decoding PEM data...
	I1003 19:35:20.777531   19037 main.go:141] libmachine: Parsing certificate...
	I1003 19:35:20.777612   19037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-10413/.minikube/certs/cert.pem
	I1003 19:35:20.777672   19037 main.go:141] libmachine: Decoding PEM data...
	I1003 19:35:20.777687   19037 main.go:141] libmachine: Parsing certificate...
	I1003 19:35:20.778340   19037 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:35:20.832383   19037 cli_runner.go:211] docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:35:20.832490   19037 network_create.go:281] running [docker network inspect kubernetes-upgrade-325000] to gather additional debugging logs...
	I1003 19:35:20.832506   19037 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000
	W1003 19:35:20.882885   19037 cli_runner.go:211] docker network inspect kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:35:20.882912   19037 network_create.go:284] error running [docker network inspect kubernetes-upgrade-325000]: docker network inspect kubernetes-upgrade-325000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-325000 not found
	I1003 19:35:20.882931   19037 network_create.go:286] output of [docker network inspect kubernetes-upgrade-325000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-325000 not found
	
	** /stderr **
	I1003 19:35:20.883086   19037 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:35:20.934726   19037 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:35:20.936331   19037 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:35:20.936716   19037 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000871950}
	I1003 19:35:20.936730   19037 network_create.go:124] attempt to create docker network kubernetes-upgrade-325000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1003 19:35:20.936807   19037 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000
	W1003 19:35:20.987199   19037 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:35:20.987243   19037 network_create.go:149] failed to create docker network kubernetes-upgrade-325000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1003 19:35:20.987255   19037 network_create.go:116] failed to create docker network kubernetes-upgrade-325000 192.168.67.0/24, will retry: subnet is taken
	I1003 19:35:20.988917   19037 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1003 19:35:20.989305   19037 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f66250}
	I1003 19:35:20.989321   19037 network_create.go:124] attempt to create docker network kubernetes-upgrade-325000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1003 19:35:20.989388   19037 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000
	I1003 19:35:21.075045   19037 network_create.go:108] docker network kubernetes-upgrade-325000 192.168.76.0/24 created
	I1003 19:35:21.075074   19037 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-325000" container
	I1003 19:35:21.075197   19037 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:35:21.128322   19037 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-325000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:35:21.199628   19037 oci.go:103] Successfully created a docker volume kubernetes-upgrade-325000
	I1003 19:35:21.199751   19037 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-325000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --entrypoint /usr/bin/test -v kubernetes-upgrade-325000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1003 19:35:21.531770   19037 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-325000
	I1003 19:35:21.531801   19037 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 19:35:21.531813   19037 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 19:35:21.531906   19037 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-325000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:41:20.791703   19037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:41:20.791820   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:20.845664   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:20.845785   19037 retry.go:31] will retry after 311.091372ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:21.157736   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:21.215358   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:21.215470   19037 retry.go:31] will retry after 459.366426ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:21.677280   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:21.731438   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:21.731544   19037 retry.go:31] will retry after 710.873835ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:22.444865   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:22.499582   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:41:22.499720   19037 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	W1003 19:41:22.499746   19037 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:22.499794   19037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:41:22.499847   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:22.552127   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:22.552226   19037 retry.go:31] will retry after 329.782226ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:22.882956   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:22.935613   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:22.935746   19037 retry.go:31] will retry after 387.674624ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:23.325585   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:23.377567   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:23.377663   19037 retry.go:31] will retry after 292.696329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:23.672797   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:23.726451   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:41:23.726551   19037 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	W1003 19:41:23.726568   19037 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:23.726579   19037 start.go:128] duration metric: createHost completed in 6m2.958227094s
	I1003 19:41:23.726644   19037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:41:23.726719   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:23.776554   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:23.776667   19037 retry.go:31] will retry after 334.968481ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:24.112488   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:24.166587   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:24.166685   19037 retry.go:31] will retry after 256.046476ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:24.425165   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:24.477095   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:24.477181   19037 retry.go:31] will retry after 777.170495ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:25.254824   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:25.309339   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:41:25.309436   19037 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	W1003 19:41:25.309459   19037 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:25.309532   19037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:41:25.309602   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:25.360668   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:25.360763   19037 retry.go:31] will retry after 139.008357ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:25.502188   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:25.553967   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:25.554068   19037 retry.go:31] will retry after 562.450609ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:26.118195   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:26.171363   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	I1003 19:41:26.171444   19037 retry.go:31] will retry after 554.649281ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:26.728572   19037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	W1003 19:41:26.781798   19037 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000 returned with exit code 1
	W1003 19:41:26.781911   19037 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	W1003 19:41:26.781928   19037 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-325000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	I1003 19:41:26.781936   19037 fix.go:56] fixHost completed within 6m27.300124397s
	I1003 19:41:26.781945   19037 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 6m27.30020121s
	W1003 19:41:26.782030   19037 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-325000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-325000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1003 19:41:26.825368   19037 out.go:177] 
	W1003 19:41:26.846559   19037 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1003 19:41:26.846620   19037 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1003 19:41:26.846754   19037 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1003 19:41:26.889431   19037 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 52
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-325000
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-325000: exit status 82 (13.598736791s)

                                                
                                                
-- stdout --
	* Stopping node "kubernetes-upgrade-325000"  ...
	* Stopping node "kubernetes-upgrade-325000"  ...
	* Stopping node "kubernetes-upgrade-325000"  ...
	* Stopping node "kubernetes-upgrade-325000"  ...
	* Stopping node "kubernetes-upgrade-325000"  ...
	* Stopping node "kubernetes-upgrade-325000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect kubernetes-upgrade-325000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:242: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-325000 failed: exit status 82
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-10-03 19:41:40.54494 -0700 PDT m=+6765.413394838
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-325000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-325000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-325000",
	        "Id": "61079f8cf1d253a9abc3085106b18375bb63a1582263d4afd58527c1caf55ba4",
	        "Created": "2023-10-04T02:35:21.035241889Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "kubernetes-upgrade-325000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-325000 -n kubernetes-upgrade-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-325000 -n kubernetes-upgrade-325000: exit status 7 (94.157439ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:41:40.694329   19393 status.go:249] status error: host: state: unknown state "kubernetes-upgrade-325000": docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-325000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-325000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-325000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-325000
--- FAIL: TestKubernetesUpgrade (770.31s)

                                                
                                    
TestMissingContainerUpgrade (7200.699s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3814244707.exe start -p missing-upgrade-672000 --memory=2200 --driver=docker 
E1003 19:16:59.899918   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:17:48.115072   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 19:20:02.960272   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:21:59.909943   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:22:48.125904   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 19:25:51.244257   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
E1003 19:26:59.920916   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3814244707.exe start -p missing-upgrade-672000 --memory=2200 --driver=docker : exit status 70 (11m23.212900527s)

                                                
                                                
-- stdout --
	! [missing-upgrade-672000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (13 available), Memory=2200MB (5939MB available) ...
	! StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds
	* Deleting "missing-upgrade-672000" in docker ...
	* Updating the running docker "missing-upgrade-672000" container ...
	* StartHost failed again: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: EOF
	  - Run: "minikube delete -p missing-upgrade-672000", then "minikube start -p missing-upgrade-672000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: EOF
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3814244707.exe start -p missing-upgrade-672000 --memory=2200 --driver=docker 
E1003 19:27:48.134753   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3814244707.exe start -p missing-upgrade-672000 --memory=2200 --driver=docker : exit status 70 (17m0.851485561s)

                                                
                                                
-- stdout --
	* [missing-upgrade-672000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-672000" container ...
	! StartHost failed, but will try again: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: EOF
	* Updating the running docker "missing-upgrade-672000" container ...
	* StartHost failed again: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: EOF
	  - Run: "minikube delete -p missing-upgrade-672000", then "minikube start -p missing-upgrade-672000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: EOF
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3814244707.exe start -p missing-upgrade-672000 --memory=2200 --driver=docker 
E1003 19:47:00.016445   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 19:47:48.231330   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestMissingContainerUpgrade (32m34s)
	TestNetworkPlugins (32m40s)
	TestNetworkPlugins/group (32m40s)
	TestStoppedBinaryUpgrade (7m14s)
	TestStoppedBinaryUpgrade/Upgrade (7m13s)

                                                
                                                
goroutine 1933 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000622b60, 0xc0009d3b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc000488000?, {0x4c1dc80, 0x2a, 0x2a}, {0x10b00a5?, 0xc000068180?, 0x4c3f380?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc000488000)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00015b480)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 508 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0011744e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0011744e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0011744e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc0011744e0, 0x34d5f18)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1918 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011329a0, 0xc0014c5500)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1915
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 506 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0011741a0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0011741a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0011741a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0011741a0, 0x34d5ef0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1917 [IO wait, 7 minutes]:
internal/poll.runtime_pollWait(0x4c48b630, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000af3d40?, 0xc000ee6c58?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000af3d40, {0xc000ee6c58, 0x3a8, 0x3a8})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a29120, {0xc000ee6c58?, 0xc000e2ce68?, 0xc000e2ce68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000f7b650, {0x39259c0, 0xc000a29120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000f7b650}, {0x39259c0, 0xc000a29120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0014c47e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1915
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 39 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.100.1/klog.go:1141 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 38
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.100.1/klog.go:1137 +0x171

                                                
                                                
goroutine 127 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 126
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 504 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000501a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000501a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc000501a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc000501a00, 0x34d5ee0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 125 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000947a10, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3922390?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000af2d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000947a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x44ec43e2dc6?, {0x3926ee0, 0xc00075c8d0}, 0x1, 0xc000ab2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ab3140?, 0x3b9aca00, 0x0, 0xd0?, 0x10446bc?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117c225?, 0xc000664f20?, 0xc000ab31a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 140
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 126 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39492d8, 0xc000ab2060}, 0xc000e2f750, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x39492d8, 0xc000ab2060}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39492d8?, 0xc000ab2060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 140
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1799 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7fd40)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7fd40, 0xc000546d80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 507 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001174340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001174340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc001174340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc001174340, 0x34d5f20)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 512 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001174b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001174b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc001174b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc001174b60, 0x34d5f40)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 139 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000af2f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 129
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 140 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000947a40, 0xc000ab2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 129
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cache.go:122 +0x594

                                                
                                                
goroutine 1916 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x4c48bc00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000af3c80?, 0xc000c0eaf1?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000af3c80, {0xc000c0eaf1, 0x50f, 0x50f})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a290d8, {0xc000c0eaf1?, 0xc000af3320?, 0xc000e2f668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000f7b620, {0x39259c0, 0xc000a290d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000f7b620}, {0x39259c0, 0xc000a290d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0014c4600?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1915
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1704 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014fc4e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0014fc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0014fc4e0, 0x34d5fd8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1703 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014fc340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0014fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0014fc340, 0x34d5fc8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1915 [syscall, 7 minutes]:
syscall.syscall6(0x1010585?, 0xc0011e9788?, 0xc0011e9678?, 0xc0011e97a8?, 0x100c0011e9770?, 0x1000000000003?, 0x537a070?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0011e9720?, 0x1010905?, 0x90?, 0x3071ea0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc00112ee90?, 0xc0011e9754, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0014a27e0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0011329a0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000683a00?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000683a00, 0xc0011329a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:196 +0x36f
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0011e9c18?, {0x3932af0, 0xc0014de900}, 0x34d6f50, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x13c
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0xc000e1cc80?, {0x3932af0?, 0xc0014de900?}, 0x1016f12?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc000e1cec0?, 0x3b9aca00, 0x1a3185c5000, {0xc000e1cd08?, 0x2c23600?, 0x3a53c00?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000683a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:201 +0x2de
testing.tRunner(0xc000683a00, 0xc000a08600)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1779
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1795 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7f380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7f380, 0xc000546b80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1798 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7fa00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7fa00, 0xc000546d00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1793 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7f040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7f040, 0xc000546980)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1765 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014fc9c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0014ab4a0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0014fc9c0, 0x34d6008)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 505 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001174000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001174000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc001174000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc001174000, 0x34d5ed8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 511 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0011749c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0011749c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc0011749c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc0011749c0, 0x34d5f38)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1702 [chan receive, 34 minutes]:
testing.(*T).Run(0xc0014fc1a0, {0x3103d60?, 0x818edbcba0d?}, 0xc000f60240)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014fc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0014fc1a0, 0x34d5fc0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1776 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7e820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7e820, 0xc000546900)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 571 [IO wait, 116 minutes]:
internal/poll.runtime_pollWait(0x4c48c3c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0010f8f00?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0010f8f00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0010f8f00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00122c740)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc00122c740)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0004e9590, {0x393c820, 0xc00122c740})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0004e9590)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0011f0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 568
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x13a

                                                
                                                
goroutine 1781 [syscall, 5 minutes]:
syscall.syscall6(0x1010585?, 0xc0011eb7a0?, 0xc0011eb690?, 0xc0011eb7c0?, 0x100c0011eb788?, 0x1000000000003?, 0x4c2c5618?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0011eb738?, 0x1010905?, 0x90?, 0x3071ea0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0012f01a0?, 0xc0011eb76c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001418120)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000664000)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0014fd040?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014fd040, 0xc000664000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestMissingContainerUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:322 +0x65
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0011ebb90?, {0x3932af0, 0xc001024b20}, 0x34d6f50, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x13c
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x3103c20?, {0x3932af0?, 0xc001024b20?}, 0x1016f12?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0011ebeb0?, 0x3b9aca00, 0x1a3185c5000, {0xc0011ebc68?, 0x2c23600?, 0x3f2dd4f?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0014fd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:327 +0x558
testing.tRunner(0xc0014fd040, 0x34d5fa8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1794 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7f1e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7f1e0, 0xc000546b00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1775 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7e680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7e680, 0xc000546700)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1930 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x4cce8960, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00142c3c0?, 0xc0004dfdf3?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00142c3c0, {0xc0004dfdf3, 0x20d, 0x20d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a28d38, {0xc0004dfdf3?, 0xc001365e68?, 0xc001365e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000ee20f0, {0x39259c0, 0xc000a28d38})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000ee20f0}, {0x39259c0, 0xc000a28d38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0014c4840?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1781
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 822 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 821
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1157 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0xc001355760, 0xc001326b40)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 700
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1932 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000664000, 0xc001326240)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1781
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1774 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000e7e4e0, 0xc000f60240)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1702
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1797 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7f6c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7f6c0, 0xc000546c80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 981 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011322c0, 0xc0012ac360)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 980
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 820 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000c05450, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3922390?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00094b740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c05480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xba80ec834878246c?, {0x3926ee0, 0xc0007f3e90}, 0x1, 0xc000ab2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012b81e0?, 0x3b9aca00, 0x0, 0xd0?, 0x10446bc?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117c225?, 0xc000aaba20?, 0xc0012b8e40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 795
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 795 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c05480, 0xc000ab2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 713
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cache.go:122 +0x594

                                                
                                                
goroutine 1239 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0xc00148a360)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1232
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

                                                
                                                
goroutine 1161 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0xc00140c420, 0xc001326ea0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1160
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1238 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0xc00148a360)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1232
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

                                                
                                                
goroutine 1137 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0xc00124a9a0, 0xc00123e720)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1120
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1778 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014fc680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0014fc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc0014fc680, 0x34d5fe8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 794 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00094b860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 713
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1779 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0014fcd00, {0x3107e0e?, 0x310e902?}, 0xc000a08600)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0014fcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:173 +0x305
testing.tRunner(0xc0014fcd00, 0x34d6010)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 821 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39492d8, 0xc000ab2060}, 0xc001362f50, 0xc0012fb7f8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x39492d8, 0xc000ab2060}, 0x1?, 0x1?, 0xc001362fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39492d8?, 0xc000ab2060?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001362fd0?, 0x117c287?, 0xc0012b8d20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 795
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1796 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000c075e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e7f520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e7f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000e7f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000e7f520, 0xc000546c00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1774
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1931 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4cce8770, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00142c480?, 0xc0004f7400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00142c480, {0xc0004f7400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a28d80, {0xc0004f7400?, 0xc000e2e668?, 0xc000e2e668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000ee2120, {0x39259c0, 0xc000a28d80})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000ee2120}, {0x39259c0, 0xc000a28d80}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0014c4120?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1781
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                    

Test pass (141/181)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.03
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.28.2/json-events 9.23
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.35
16 TestDownloadOnly/DeleteAll 0.65
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
18 TestDownloadOnlyKic 1.9
19 TestBinaryMirror 1.58
22 TestAddons/Setup 152.04
26 TestAddons/parallel/InspektorGadget 10.83
27 TestAddons/parallel/MetricsServer 5.88
28 TestAddons/parallel/HelmTiller 10.94
30 TestAddons/parallel/CSI 50.77
31 TestAddons/parallel/Headlamp 13.48
32 TestAddons/parallel/CloudSpanner 5.65
33 TestAddons/parallel/LocalPath 53.85
36 TestAddons/serial/GCPAuth/Namespaces 0.1
37 TestAddons/StoppedEnableDisable 11.74
48 TestErrorSpam/setup 21.78
49 TestErrorSpam/start 1.97
50 TestErrorSpam/status 1.18
51 TestErrorSpam/pause 1.67
52 TestErrorSpam/unpause 1.67
53 TestErrorSpam/stop 11.43
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 75.16
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 36.63
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/CacheCmd/cache/add_remote 4.89
65 TestFunctional/serial/CacheCmd/cache/add_local 1.81
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
67 TestFunctional/serial/CacheCmd/cache/list 0.06
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
69 TestFunctional/serial/CacheCmd/cache/cache_reload 2.31
70 TestFunctional/serial/CacheCmd/cache/delete 0.13
71 TestFunctional/serial/MinikubeKubectlCmd 0.57
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.75
73 TestFunctional/serial/ExtraConfig 38.06
74 TestFunctional/serial/ComponentHealth 0.06
75 TestFunctional/serial/LogsCmd 3.08
76 TestFunctional/serial/LogsFileCmd 3.16
77 TestFunctional/serial/InvalidService 4.23
79 TestFunctional/parallel/ConfigCmd 0.4
80 TestFunctional/parallel/DashboardCmd 12.37
81 TestFunctional/parallel/DryRun 1.48
82 TestFunctional/parallel/InternationalLanguage 0.67
83 TestFunctional/parallel/StatusCmd 1.22
88 TestFunctional/parallel/AddonsCmd 0.23
89 TestFunctional/parallel/PersistentVolumeClaim 28.52
91 TestFunctional/parallel/SSHCmd 0.77
92 TestFunctional/parallel/CpCmd 1.79
93 TestFunctional/parallel/MySQL 38.58
94 TestFunctional/parallel/FileSync 0.44
95 TestFunctional/parallel/CertSync 2.58
99 TestFunctional/parallel/NodeLabels 0.05
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
103 TestFunctional/parallel/License 0.53
104 TestFunctional/parallel/Version/short 0.14
105 TestFunctional/parallel/Version/components 0.92
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
110 TestFunctional/parallel/ImageCommands/ImageBuild 2.98
111 TestFunctional/parallel/ImageCommands/Setup 2.84
112 TestFunctional/parallel/DockerEnv/bash 2.1
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.3
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.81
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.14
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.93
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.09
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.78
123 TestFunctional/parallel/ServiceCmd/DeployApp 17.2
124 TestFunctional/parallel/ServiceCmd/List 0.43
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
126 TestFunctional/parallel/ServiceCmd/HTTPS 15
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.19
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
138 TestFunctional/parallel/ServiceCmd/Format 15
139 TestFunctional/parallel/ServiceCmd/URL 15
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
141 TestFunctional/parallel/ProfileCmd/profile_list 0.46
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
143 TestFunctional/parallel/MountCmd/any-port 8.63
144 TestFunctional/parallel/MountCmd/specific-port 2.56
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.82
146 TestFunctional/delete_addon-resizer_images 0.14
147 TestFunctional/delete_my-image_image 0.05
148 TestFunctional/delete_minikube_cached_images 0.05
152 TestImageBuild/serial/Setup 21.39
153 TestImageBuild/serial/NormalBuild 1.81
154 TestImageBuild/serial/BuildWithBuildArg 1
155 TestImageBuild/serial/BuildWithDockerIgnore 0.74
156 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
166 TestJSONOutput/start/Command 75.13
167 TestJSONOutput/start/Audit 0
169 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Command 0.55
173 TestJSONOutput/pause/Audit 0
175 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Command 0.62
179 TestJSONOutput/unpause/Audit 0
181 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/stop/Command 10.94
185 TestJSONOutput/stop/Audit 0
187 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
189 TestErrorJSONOutput 0.78
191 TestKicCustomNetwork/create_custom_network 24.09
192 TestKicCustomNetwork/use_default_bridge_network 23.24
193 TestKicExistingNetwork 24.01
194 TestKicCustomSubnet 24.23
195 TestKicStaticIP 24.54
196 TestMainNoArgs 0.07
197 TestMinikubeProfile 50.93
200 TestMountStart/serial/StartWithMountFirst 7.23
201 TestMountStart/serial/VerifyMountFirst 0.37
202 TestMountStart/serial/StartWithMountSecond 7.53
203 TestMountStart/serial/VerifyMountSecond 0.38
204 TestMountStart/serial/DeleteFirst 2.06
205 TestMountStart/serial/VerifyMountPostDelete 0.37
206 TestMountStart/serial/Stop 1.56
207 TestMountStart/serial/RestartStopped 8.68
226 TestPreload 138.23
TestDownloadOnly/v1.16.0/json-events (18.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-439000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-439000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (18.025015751s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.03s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-439000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-439000: exit status 85 (281.264908ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-439000 | jenkins | v1.31.2 | 03 Oct 23 17:48 PDT |          |
	|         | -p download-only-439000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:48:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:48:54.999129   10865 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:48:54.999340   10865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:48:54.999345   10865 out.go:309] Setting ErrFile to fd 2...
	I1003 17:48:54.999349   10865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:48:54.999526   10865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	W1003 17:48:54.999620   10865 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17345-10413/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17345-10413/.minikube/config/config.json: no such file or directory
	I1003 17:48:55.001261   10865 out.go:303] Setting JSON to true
	I1003 17:48:55.022694   10865 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4703,"bootTime":1696375832,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 17:48:55.022781   10865 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:48:55.044622   10865 out.go:97] [download-only-439000] minikube v1.31.2 on Darwin 14.0
	I1003 17:48:55.066025   10865 out.go:169] MINIKUBE_LOCATION=17345
	W1003 17:48:55.044836   10865 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 17:48:55.044857   10865 notify.go:220] Checking for updates...
	I1003 17:48:55.109271   10865 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 17:48:55.129952   10865 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 17:48:55.151292   10865 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:48:55.172312   10865 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	W1003 17:48:55.214116   10865 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:48:55.214560   10865 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:48:55.271569   10865 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 17:48:55.271675   10865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:48:55.368786   10865 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-04 00:48:55.358835646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 17:48:55.389850   10865 out.go:97] Using the docker driver based on user configuration
	I1003 17:48:55.389880   10865 start.go:298] selected driver: docker
	I1003 17:48:55.389895   10865 start.go:902] validating driver "docker" against <nil>
	I1003 17:48:55.390079   10865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:48:55.489774   10865 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-04 00:48:55.478837063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 17:48:55.489946   10865 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:48:55.492875   10865 start_flags.go:384] Using suggested 5891MB memory alloc based on sys=32768MB, container=5939MB
	I1003 17:48:55.493010   10865 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:48:55.513891   10865 out.go:169] Using Docker Desktop driver with root privileges
	I1003 17:48:55.534965   10865 cni.go:84] Creating CNI manager for ""
	I1003 17:48:55.535002   10865 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:48:55.535020   10865 start_flags.go:321] config:
	{Name:download-only-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-439000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:48:55.556668   10865 out.go:97] Starting control plane node download-only-439000 in cluster download-only-439000
	I1003 17:48:55.556693   10865 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 17:48:55.577544   10865 out.go:97] Pulling base image ...
	I1003 17:48:55.577604   10865 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:48:55.577701   10865 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 17:48:55.630795   10865 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1003 17:48:55.631016   10865 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1003 17:48:55.631133   10865 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1003 17:48:55.640595   10865 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1003 17:48:55.640624   10865 cache.go:57] Caching tarball of preloaded images
	I1003 17:48:55.640807   10865 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:48:55.661785   10865 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1003 17:48:55.661804   10865 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 17:48:55.740540   10865 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1003 17:49:05.341758   10865 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 17:49:05.341936   10865 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 17:49:05.888811   10865 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 17:49:05.889040   10865 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/download-only-439000/config.json ...
	I1003 17:49:05.889061   10865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/download-only-439000/config.json: {Name:mk601f4ceef8ceddf0db3e91bb54c393e5297196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:49:05.889327   10865 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:49:05.889567   10865 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-439000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (9.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-439000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-439000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker : (9.227641684s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (9.23s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-439000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-439000: exit status 85 (347.650521ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-439000 | jenkins | v1.31.2 | 03 Oct 23 17:48 PDT |          |
	|         | -p download-only-439000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-439000 | jenkins | v1.31.2 | 03 Oct 23 17:49 PDT |          |
	|         | -p download-only-439000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:49:13
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:49:13.313433   10909 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:49:13.313642   10909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:49:13.313647   10909 out.go:309] Setting ErrFile to fd 2...
	I1003 17:49:13.313651   10909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:49:13.313830   10909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	W1003 17:49:13.313926   10909 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17345-10413/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17345-10413/.minikube/config/config.json: no such file or directory
	I1003 17:49:13.315369   10909 out.go:303] Setting JSON to true
	I1003 17:49:13.338176   10909 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4721,"bootTime":1696375832,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 17:49:13.338262   10909 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:49:13.358913   10909 out.go:97] [download-only-439000] minikube v1.31.2 on Darwin 14.0
	I1003 17:49:13.380099   10909 out.go:169] MINIKUBE_LOCATION=17345
	I1003 17:49:13.358999   10909 notify.go:220] Checking for updates...
	I1003 17:49:13.423865   10909 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 17:49:13.481023   10909 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 17:49:13.502033   10909 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:49:13.523160   10909 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	W1003 17:49:13.564869   10909 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:49:13.565352   10909 config.go:182] Loaded profile config "download-only-439000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1003 17:49:13.565408   10909 start.go:810] api.Load failed for download-only-439000: filestore "download-only-439000": Docker machine "download-only-439000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1003 17:49:13.565518   10909 driver.go:373] Setting default libvirt URI to qemu:///system
	W1003 17:49:13.565546   10909 start.go:810] api.Load failed for download-only-439000: filestore "download-only-439000": Docker machine "download-only-439000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1003 17:49:13.624553   10909 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 17:49:13.624688   10909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:49:13.724975   10909 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-04 00:49:13.711330263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 17:49:13.745860   10909 out.go:97] Using the docker driver based on existing profile
	I1003 17:49:13.745888   10909 start.go:298] selected driver: docker
	I1003 17:49:13.745900   10909 start.go:902] validating driver "docker" against &{Name:download-only-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-439000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:49:13.746157   10909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:49:13.852335   10909 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-04 00:49:13.838520477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 17:49:13.855515   10909 cni.go:84] Creating CNI manager for ""
	I1003 17:49:13.855535   10909 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:49:13.855549   10909 start_flags.go:321] config:
	{Name:download-only-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-439000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:49:13.877008   10909 out.go:97] Starting control plane node download-only-439000 in cluster download-only-439000
	I1003 17:49:13.877049   10909 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 17:49:13.898575   10909 out.go:97] Pulling base image ...
	I1003 17:49:13.898638   10909 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:49:13.898721   10909 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1003 17:49:13.951145   10909 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1003 17:49:13.951378   10909 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1003 17:49:13.951413   10909 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1003 17:49:13.951419   10909 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1003 17:49:13.951429   10909 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1003 17:49:13.953090   10909 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 17:49:13.953101   10909 cache.go:57] Caching tarball of preloaded images
	I1003 17:49:13.953249   10909 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:49:13.973781   10909 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1003 17:49:13.973807   10909 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1003 17:49:14.049923   10909 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /Users/jenkins/minikube-integration/17345-10413/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-439000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.35s)

TestDownloadOnly/DeleteAll (0.65s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.65s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-439000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestDownloadOnlyKic (1.9s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-742000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-742000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-742000
--- PASS: TestDownloadOnlyKic (1.90s)

TestBinaryMirror (1.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-261000 --alsologtostderr --binary-mirror http://127.0.0.1:57241 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-261000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-261000
--- PASS: TestBinaryMirror (1.58s)

TestAddons/Setup (152.04s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-890000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p addons-890000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.03962076s)
--- PASS: TestAddons/Setup (152.04s)

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rnqm9" [1c7152f0-7187-47ea-aff5-7068f3542861] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015310016s
addons_test.go:819: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-890000
addons_test.go:819: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-890000: (5.817854343s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 4.639811ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-lbbxr" [2ab3d05d-22cb-4408-a89a-576f64109e2d] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016329484s
addons_test.go:393: (dbg) Run:  kubectl --context addons-890000 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/HelmTiller (10.94s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 4.159211ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-svf7c" [b1d319e9-6f3f-4f65-8c5e-5dfc87a45efa] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01306196s
addons_test.go:451: (dbg) Run:  kubectl --context addons-890000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-890000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.146270836s)
addons_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.94s)

TestAddons/parallel/CSI (50.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 5.058228ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-890000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-890000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4833d7f3-3b9c-4651-b9a9-76f05606c1cf] Pending
helpers_test.go:344: "task-pv-pod" [4833d7f3-3b9c-4651-b9a9-76f05606c1cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4833d7f3-3b9c-4651-b9a9-76f05606c1cf] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012620122s
addons_test.go:562: (dbg) Run:  kubectl --context addons-890000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-890000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-890000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-890000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-890000 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-890000 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-890000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-890000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [69e22f3a-277b-4020-92a4-4acaffb1f834] Pending
helpers_test.go:344: "task-pv-pod-restore" [69e22f3a-277b-4020-92a4-4acaffb1f834] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [69e22f3a-277b-4020-92a4-4acaffb1f834] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.030693663s
addons_test.go:604: (dbg) Run:  kubectl --context addons-890000 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-890000 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-890000 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-darwin-amd64 -p addons-890000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.273393916s)
addons_test.go:620: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:620: (dbg) Done: out/minikube-darwin-amd64 -p addons-890000 addons disable volumesnapshots --alsologtostderr -v=1: (1.038239623s)
--- PASS: TestAddons/parallel/CSI (50.77s)

TestAddons/parallel/Headlamp (13.48s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-890000 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-890000 --alsologtostderr -v=1: (1.462382848s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-m82dv" [94fcc4af-7ce4-40e8-b0ef-c7c78bfd4900] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-m82dv" [94fcc4af-7ce4-40e8-b0ef-c7c78bfd4900] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.011884025s
--- PASS: TestAddons/parallel/Headlamp (13.48s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-6brxg" [bb49724e-5650-4d06-b8f0-05ae4ee9c6d7] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010617722s
addons_test.go:838: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-890000
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (53.85s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-890000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-890000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [af23ab28-b990-49d1-8217-8c19bbd65d61] Pending
helpers_test.go:344: "test-local-path" [af23ab28-b990-49d1-8217-8c19bbd65d61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [af23ab28-b990-49d1-8217-8c19bbd65d61] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [af23ab28-b990-49d1-8217-8c19bbd65d61] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010091716s
addons_test.go:869: (dbg) Run:  kubectl --context addons-890000 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 ssh "cat /opt/local-path-provisioner/pvc-fc7dfdcb-6702-4ad0-8f57-7bdd4f2c12af_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-890000 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-890000 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-darwin-amd64 -p addons-890000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.461542168s)
--- PASS: TestAddons/parallel/LocalPath (53.85s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-890000 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-890000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.74s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-890000
addons_test.go:150: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-890000: (11.07126268s)
addons_test.go:154: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-890000
addons_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-890000
addons_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-890000
--- PASS: TestAddons/StoppedEnableDisable (11.74s)

TestErrorSpam/setup (21.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-289000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-289000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 --driver=docker : (21.777399861s)
--- PASS: TestErrorSpam/setup (21.78s)

TestErrorSpam/start (1.97s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 start --dry-run
--- PASS: TestErrorSpam/start (1.97s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.67s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

TestErrorSpam/stop (11.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 stop: (10.832033527s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-289000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-289000 stop
--- PASS: TestErrorSpam/stop (11.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17345-10413/.minikube/files/etc/test/nested/copy/10863/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-913000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-913000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m15.161351667s)
--- PASS: TestFunctional/serial/StartWithProxy (75.16s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.63s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-913000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-913000 --alsologtostderr -v=8: (36.631901557s)
functional_test.go:659: soft start took 36.632517584s for "functional-913000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.63s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-913000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 cache add registry.k8s.io/pause:3.1: (1.655799175s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 cache add registry.k8s.io/pause:3.3: (1.697665843s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 cache add registry.k8s.io/pause:latest: (1.536966207s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.89s)

TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2342906848/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cache add minikube-local-cache-test:functional-913000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 cache add minikube-local-cache-test:functional-913000: (1.194004272s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cache delete minikube-local-cache-test:functional-913000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-913000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (380.256118ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 cache reload: (1.125040168s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 kubectl -- --context functional-913000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.57s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-913000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.75s)

TestFunctional/serial/ExtraConfig (38.06s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-913000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1003 17:56:59.764696   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:56:59.771453   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:56:59.781594   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:56:59.802399   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:56:59.844509   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:56:59.925919   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:00.086224   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:00.407925   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:01.048386   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:02.329281   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:04.889649   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:10.010152   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
E1003 17:57:20.252245   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-913000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.063802758s)
functional_test.go:757: restart took 38.063989229s for "functional-913000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.06s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-913000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 logs: (3.078439359s)
--- PASS: TestFunctional/serial/LogsCmd (3.08s)

TestFunctional/serial/LogsFileCmd (3.16s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2286108676/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2286108676/001/logs.txt: (3.155464636s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.16s)

TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-913000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-913000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-913000: exit status 115 (545.689151ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30536 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-913000 delete -f testdata/invalidsvc.yaml
E1003 17:57:40.733581   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.23s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 config get cpus: exit status 14 (42.593478ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 config get cpus: exit status 14 (43.282705ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (12.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-913000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-913000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 13306: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.37s)

TestFunctional/parallel/DryRun (1.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-913000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-913000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (791.688225ms)

-- stdout --
	* [functional-913000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1003 17:59:16.145760   13231 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:59:16.146031   13231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:59:16.146038   13231 out.go:309] Setting ErrFile to fd 2...
	I1003 17:59:16.146042   13231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:59:16.146220   13231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 17:59:16.147958   13231 out.go:303] Setting JSON to false
	I1003 17:59:16.170614   13231 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5324,"bootTime":1696375832,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 17:59:16.170725   13231 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:59:16.195229   13231 out.go:177] * [functional-913000] minikube v1.31.2 on Darwin 14.0
	I1003 17:59:16.256846   13231 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:59:16.235978   13231 notify.go:220] Checking for updates...
	I1003 17:59:16.300618   13231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 17:59:16.321863   13231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 17:59:16.379621   13231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:59:16.437790   13231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 17:59:16.479742   13231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:59:16.501261   13231 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:59:16.501687   13231 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:59:16.564322   13231 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 17:59:16.564486   13231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:59:16.691499   13231 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-04 00:59:16.677177589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 17:59:16.713998   13231 out.go:177] * Using the docker driver based on existing profile
	I1003 17:59:16.786718   13231 start.go:298] selected driver: docker
	I1003 17:59:16.786732   13231 start.go:902] validating driver "docker" against &{Name:functional-913000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-913000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:59:16.786807   13231 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:59:16.810835   13231 out.go:177] 
	W1003 17:59:16.831883   13231 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 17:59:16.852693   13231 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-913000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.48s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-913000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-913000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (667.487916ms)

                                                
                                                
-- stdout --
	* [functional-913000] minikube v1.31.2 sur Darwin 14.0
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:59:17.614435   13275 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:59:17.614623   13275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:59:17.614628   13275 out.go:309] Setting ErrFile to fd 2...
	I1003 17:59:17.614632   13275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:59:17.614841   13275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
	I1003 17:59:17.616338   13275 out.go:303] Setting JSON to false
	I1003 17:59:17.638220   13275 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5325,"bootTime":1696375832,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1003 17:59:17.638332   13275 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:59:17.670147   13275 out.go:177] * [functional-913000] minikube v1.31.2 sur Darwin 14.0
	I1003 17:59:17.765274   13275 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:59:17.728542   13275 notify.go:220] Checking for updates...
	I1003 17:59:17.807475   13275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig
	I1003 17:59:17.849387   13275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 17:59:17.870526   13275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:59:17.892208   13275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube
	I1003 17:59:17.913710   13275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:59:17.936205   13275 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:59:17.937014   13275 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:59:17.993936   13275 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 17:59:17.994067   13275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 17:59:18.092554   13275 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-04 00:59:18.082059434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227595264 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 17:59:18.114499   13275 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1003 17:59:18.135570   13275 start.go:298] selected driver: docker
	I1003 17:59:18.135600   13275 start.go:902] validating driver "docker" against &{Name:functional-913000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-913000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:59:18.135712   13275 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:59:18.161579   13275 out.go:177] 
	W1003 17:59:18.183675   13275 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 17:59:18.205573   13275 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.67s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)
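The second invocation above shows that "status -f" takes an arbitrary Go template: literal labels of your choosing (the test happens to use "kublet:" as one of them) mixed with {{.Field}} references into the status struct. An illustrative variant, a sketch only, using two of the fields exercised above:

    out/minikube-darwin-amd64 -p functional-913000 status -f "apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}"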

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cbc14eef-0fe6-4bbb-b46a-e70829c5e40d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.017316676s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-913000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-913000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-913000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-913000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dcbecdc9-9572-4e28-8a95-23a5fcdb4c95] Pending
helpers_test.go:344: "sp-pod" [dcbecdc9-9572-4e28-8a95-23a5fcdb4c95] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dcbecdc9-9572-4e28-8a95-23a5fcdb4c95] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.014620329s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-913000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-913000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-913000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [67ff9ce6-052b-4de2-a439-8a850088fa8e] Pending
helpers_test.go:344: "sp-pod" [67ff9ce6-052b-4de2-a439-8a850088fa8e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [67ff9ce6-052b-4de2-a439-8a850088fa8e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.014461483s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-913000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.52s)
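Stripped of the polling and scheduling noise, the sequence above shows that data written through the claim survives deletion and re-creation of the pod. A condensed replay of the same steps (manifests are the repository testdata files referenced in the log):

    kubectl --context functional-913000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-913000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-913000 exec sp-pod -- touch /tmp/mount/foo    # write through the claim
    kubectl --context functional-913000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-913000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-913000 exec sp-pod -- ls /tmp/mount           # foo is still present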

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh -n functional-913000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 cp functional-913000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd3681146453/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh -n functional-913000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)
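The cp subcommand is used in both directions here: a host file goes into the node at /home/docker/cp-test.txt, then the same file comes back out using the <profile>:<path> source form. A condensed sketch, with the long per-test temp directory replaced by the current directory for readability:

    out/minikube-darwin-amd64 -p functional-913000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p functional-913000 cp functional-913000:/home/docker/cp-test.txt ./cp-test.txt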

                                                
                                    
TestFunctional/parallel/MySQL (38.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-913000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pxhst" [574bcd21-25e2-4e71-899b-ac50c506dc3a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pxhst" [574bcd21-25e2-4e71-899b-ac50c506dc3a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.023077821s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;": exit status 1 (127.326795ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1003 17:58:21.694873   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;": exit status 1 (137.299264ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;": exit status 1 (128.752987ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-913000 exec mysql-859648c796-pxhst -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (38.58s)
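The failed execs are expected: the first attempt hits an access-denied error and the next two cannot reach the server socket because mysqld inside the pod is still initializing; the test simply retries the same query until it answers. A sketch of what that retry amounts to, as a shell loop using the pod name from this run:

    until kubectl --context functional-913000 exec mysql-859648c796-pxhst -- \
        mysql -ppassword -e "show databases;"; do
      sleep 5    # keep retrying while mysqld is still starting up
    done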

                                                
                                    
TestFunctional/parallel/FileSync (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10863/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /etc/test/nested/copy/10863/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)
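FileSync checks that a file staged on the host under the minikube home's files/ tree ends up inside the node at the corresponding absolute path (here /etc/test/nested/copy/10863/hosts, where 10863 matches the test run's process ID). A sketch of the same mechanism for a file of your own, assuming a default setup where the minikube home is ~/.minikube (this run overrides MINIKUBE_HOME, as shown in the start logs):

    mkdir -p ~/.minikube/files/etc/test
    echo hello > ~/.minikube/files/etc/test/hello
    out/minikube-darwin-amd64 start                      # files/ is copied into the node during start
    out/minikube-darwin-amd64 ssh "cat /etc/test/hello"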

                                                
                                    
TestFunctional/parallel/CertSync (2.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10863.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /etc/ssl/certs/10863.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10863.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /usr/share/ca-certificates/10863.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/108632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /etc/ssl/certs/108632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/108632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /usr/share/ca-certificates/108632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.58s)
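CertSync reads the same certificate from three locations: the PID-named .pem under /etc/ssl/certs and under /usr/share/ca-certificates, plus what appears to be an OpenSSL subject-hash alias (the .0 file), so tools that look certificates up by hash can find it too. One way to confirm the three copies match on the running node, assuming md5sum is available in the node image:

    for f in /etc/ssl/certs/10863.pem /usr/share/ca-certificates/10863.pem /etc/ssl/certs/51391683.0; do
      out/minikube-darwin-amd64 -p functional-913000 ssh "sudo md5sum $f"
    done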

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-913000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh "sudo systemctl is-active crio": exit status 1 (523.888177ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
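The non-zero exit is the expected outcome here: with Docker as the active container runtime, "systemctl is-active crio" prints inactive and exits with status 3 (surfaced by the ssh wrapper as exit status 1), which is exactly what the test asserts. The same check by hand:

    out/minikube-darwin-amd64 -p functional-913000 ssh "sudo systemctl is-active crio"; echo "exit=$?"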

                                                
                                    
TestFunctional/parallel/License (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

                                                
                                    
TestFunctional/parallel/Version/short (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.14s)

                                                
                                    
TestFunctional/parallel/Version/components (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-913000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-913000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-913000
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-913000 image ls --format short --alsologtostderr:
I1003 17:59:28.553942   13536 out.go:296] Setting OutFile to fd 1 ...
I1003 17:59:28.554219   13536 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:28.554224   13536 out.go:309] Setting ErrFile to fd 2...
I1003 17:59:28.554228   13536 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:28.554401   13536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
I1003 17:59:28.555024   13536 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:28.555115   13536 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:28.555519   13536 cli_runner.go:164] Run: docker container inspect functional-913000 --format={{.State.Status}}
I1003 17:59:28.613764   13536 ssh_runner.go:195] Run: systemctl --version
I1003 17:59:28.613873   13536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913000
I1003 17:59:28.688281   13536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57844 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/functional-913000/id_rsa Username:docker}
I1003 17:59:28.787815   13536 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-913000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-913000 | 06b316444eb0c | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-913000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | d571254277f6a | 42.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-913000 image ls --format table --alsologtostderr:
I1003 17:59:31.034479   13573 out.go:296] Setting OutFile to fd 1 ...
I1003 17:59:31.034828   13573 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:31.034834   13573 out.go:309] Setting ErrFile to fd 2...
I1003 17:59:31.034839   13573 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:31.035046   13573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
I1003 17:59:31.035800   13573 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:31.035913   13573 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:31.036372   13573 cli_runner.go:164] Run: docker container inspect functional-913000 --format={{.State.Status}}
I1003 17:59:31.093049   13573 ssh_runner.go:195] Run: systemctl --version
I1003 17:59:31.093122   13573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913000
I1003 17:59:31.143711   13573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57844 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/functional-913000/id_rsa Username:docker}
I1003 17:59:31.236808   13573 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-913000 image ls --format json --alsologtostderr:
[{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"d571254277f6a0ba9d0c4a08f29
b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"06b316444eb0cb651155ecec04e86f81d627f0671074d16bd0f07c8924f80b70","repoDigests":[],"repoTags":["d
ocker.io/library/minikube-local-cache-test:functional-913000"],"size":"30"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-913000"],"size":"32900000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u
003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-913000 image ls --format json --alsologtostderr:
I1003 17:59:30.664153   13567 out.go:296] Setting OutFile to fd 1 ...
I1003 17:59:30.664553   13567 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:30.664560   13567 out.go:309] Setting ErrFile to fd 2...
I1003 17:59:30.664564   13567 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:30.664777   13567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
I1003 17:59:30.665525   13567 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:30.665624   13567 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:30.666096   13567 cli_runner.go:164] Run: docker container inspect functional-913000 --format={{.State.Status}}
I1003 17:59:30.728464   13567 ssh_runner.go:195] Run: systemctl --version
I1003 17:59:30.728557   13567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913000
I1003 17:59:30.787143   13567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57844 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/functional-913000/id_rsa Username:docker}
I1003 17:59:30.923216   13567 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-913000 image ls --format yaml --alsologtostderr:
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-913000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 06b316444eb0cb651155ecec04e86f81d627f0671074d16bd0f07c8924f80b70
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-913000
size: "30"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-913000 image ls --format yaml --alsologtostderr:
I1003 17:59:28.907721   13542 out.go:296] Setting OutFile to fd 1 ...
I1003 17:59:28.907963   13542 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:28.907968   13542 out.go:309] Setting ErrFile to fd 2...
I1003 17:59:28.907972   13542 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:28.908236   13542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
I1003 17:59:28.909020   13542 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:28.909178   13542 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:28.909627   13542 cli_runner.go:164] Run: docker container inspect functional-913000 --format={{.State.Status}}
I1003 17:59:28.968927   13542 ssh_runner.go:195] Run: systemctl --version
I1003 17:59:28.969019   13542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913000
I1003 17:59:29.026276   13542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57844 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/functional-913000/id_rsa Username:docker}
I1003 17:59:29.122379   13542 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh pgrep buildkitd: exit status 1 (348.27503ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image build -t localhost/my-image:functional-913000 testdata/build --alsologtostderr
2023/10/03 17:59:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image build -t localhost/my-image:functional-913000 testdata/build --alsologtostderr: (2.342398271s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-913000 image build -t localhost/my-image:functional-913000 testdata/build --alsologtostderr:
I1003 17:59:29.562550   13560 out.go:296] Setting OutFile to fd 1 ...
I1003 17:59:29.562837   13560 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:29.562842   13560 out.go:309] Setting ErrFile to fd 2...
I1003 17:59:29.562846   13560 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:59:29.563023   13560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-10413/.minikube/bin
I1003 17:59:29.563630   13560 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:29.564202   13560 config.go:182] Loaded profile config "functional-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:59:29.564623   13560 cli_runner.go:164] Run: docker container inspect functional-913000 --format={{.State.Status}}
I1003 17:59:29.615466   13560 ssh_runner.go:195] Run: systemctl --version
I1003 17:59:29.615540   13560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913000
I1003 17:59:29.667529   13560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57844 SSHKeyPath:/Users/jenkins/minikube-integration/17345-10413/.minikube/machines/functional-913000/id_rsa Username:docker}
I1003 17:59:29.761693   13560 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2840804365.tar
I1003 17:59:29.761779   13560 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 17:59:29.771598   13560 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2840804365.tar
I1003 17:59:29.775944   13560 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2840804365.tar: stat -c "%s %y" /var/lib/minikube/build/build.2840804365.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2840804365.tar': No such file or directory
I1003 17:59:29.775973   13560 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2840804365.tar --> /var/lib/minikube/build/build.2840804365.tar (3072 bytes)
I1003 17:59:29.800220   13560 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2840804365
I1003 17:59:29.809893   13560 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2840804365 -xf /var/lib/minikube/build/build.2840804365.tar
I1003 17:59:29.819447   13560 docker.go:340] Building image: /var/lib/minikube/build/build.2840804365
I1003 17:59:29.819522   13560 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-913000 /var/lib/minikube/build/build.2840804365
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B 0.0s done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8bca4c85d7d3282eec734bae4d575035918ace60b7468844078b5f516cc30e9c done
#8 naming to localhost/my-image:functional-913000 done
#8 DONE 0.0s
I1003 17:59:31.818517   13560 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-913000 /var/lib/minikube/build/build.2840804365: (1.9988493s)
I1003 17:59:31.818572   13560 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2840804365
I1003 17:59:31.828652   13560 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2840804365.tar
I1003 17:59:31.837874   13560 build_images.go:207] Built localhost/my-image:functional-913000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2840804365.tar
I1003 17:59:31.837909   13560 build_images.go:123] succeeded building to: functional-913000
I1003 17:59:31.837913   13560 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.761649829s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-913000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.84s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.1s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-913000 docker-env) && out/minikube-darwin-amd64 status -p functional-913000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-913000 docker-env) && out/minikube-darwin-amd64 status -p functional-913000": (1.25390682s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-913000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image load --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image load --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr: (4.018523846s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image load --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image load --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr: (2.299640863s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.5077737s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-913000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image load --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image load --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr: (4.22561686s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image save gcr.io/google-containers/addon-resizer:functional-913000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image save gcr.io/google-containers/addon-resizer:functional-913000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.92964315s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image rm gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.777339434s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-913000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 image save --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-913000 image save --daemon gcr.io/google-containers/addon-resizer:functional-913000 --alsologtostderr: (1.664081469s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-913000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.78s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (17.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-913000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-913000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-2pcmh" [39db3173-b606-48c9-a214-85d5e3789651] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-2pcmh" [39db3173-b606-48c9-a214-85d5e3789651] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.015220085s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 service list -o json
functional_test.go:1493: Took "417.088173ms" to run "out/minikube-darwin-amd64 -p functional-913000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 service --namespace=default --https --url hello-node: signal: killed (15.004390214s)

                                                
                                                
-- stdout --
	https://127.0.0.1:58084

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:58084
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-913000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-913000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-913000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-913000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13007: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-913000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-913000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [54e51309-fb5b-4381-a982-39f70a338e29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [54e51309-fb5b-4381-a982-39f70a338e29] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.01479942s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-913000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13036: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 service hello-node --url --format={{.IP}}: signal: killed (15.002403638s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 service hello-node --url: signal: killed (15.004462146s)

                                                
                                                
-- stdout --
	http://127.0.0.1:58157

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:58157
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "392.125525ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "65.899974ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "390.360242ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "63.78226ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port653279400/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696381152588649000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port653279400/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696381152588649000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port653279400/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696381152588649000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port653279400/001/test-1696381152588649000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (370.281821ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 00:59 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 00:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 00:59 test-1696381152588649000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh cat /mount-9p/test-1696381152588649000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-913000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d0b32bdb-5b1e-41f3-8ad0-3ccaf2d47276] Pending
helpers_test.go:344: "busybox-mount" [d0b32bdb-5b1e-41f3-8ad0-3ccaf2d47276] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d0b32bdb-5b1e-41f3-8ad0-3ccaf2d47276] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d0b32bdb-5b1e-41f3-8ad0-3ccaf2d47276] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.070086794s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-913000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port653279400/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1878468135/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (383.290653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1878468135/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh "sudo umount -f /mount-9p": exit status 1 (396.384828ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-913000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1878468135/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup869340026/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup869340026/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup869340026/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T" /mount1: exit status 1 (593.243829ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-913000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-913000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup869340026/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup869340026/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-913000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup869340026/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.82s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-913000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-913000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-913000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (21.39s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-337000 --driver=docker 
E1003 17:59:43.618111   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-337000 --driver=docker : (21.393461451s)
--- PASS: TestImageBuild/serial/Setup (21.39s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-337000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-337000: (1.811939244s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-337000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-337000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-337000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

                                                
                                    
TestJSONOutput/start/Command (75.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-372000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-372000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m15.127906113s)
--- PASS: TestJSONOutput/start/Command (75.13s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.55s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-372000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-372000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-372000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-372000 --output=json --user=testUser: (10.938548987s)
--- PASS: TestJSONOutput/stop/Command (10.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-925000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-925000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (407.664298ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6be0525f-7605-493b-a471-abd006086f4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-925000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92223f40-d2f6-4c90-b419-a0c545a24e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17345"}}
	{"specversion":"1.0","id":"1406c98b-df69-4a3b-a600-3a48d6614a03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17345-10413/kubeconfig"}}
	{"specversion":"1.0","id":"305457e6-d9af-43a1-8f5d-5fb531945a36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"deb1dbd8-c06e-4be0-97c2-4fa769fa95df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a07f679d-fbdf-4552-92e5-d3188f5e7a97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-10413/.minikube"}}
	{"specversion":"1.0","id":"83f1d4d4-a38c-4b4b-a6e3-986eb3b12b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d375bdd5-7a49-4016-a4ab-1e67e52ee70d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-925000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-925000
--- PASS: TestErrorJSONOutput (0.78s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (24.09s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-163000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-163000 --network=: (21.590398741s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-163000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-163000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-163000: (2.448302016s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.09s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.24s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-088000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-088000 --network=bridge: (20.899800264s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-088000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-088000: (2.284567281s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.24s)

                                                
                                    
TestKicExistingNetwork (24.01s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-571000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-571000 --network=existing-network: (21.384772467s)
helpers_test.go:175: Cleaning up "existing-network-571000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-571000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-571000: (2.278501078s)
--- PASS: TestKicExistingNetwork (24.01s)

                                                
                                    
TestKicCustomSubnet (24.23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-056000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-056000 --subnet=192.168.60.0/24: (21.675683645s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-056000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-056000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-056000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-056000: (2.502261902s)
--- PASS: TestKicCustomSubnet (24.23s)

                                                
                                    
TestKicStaticIP (24.54s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-694000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-694000 --static-ip=192.168.200.200: (21.908342497s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-694000 ip
helpers_test.go:175: Cleaning up "static-ip-694000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-694000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-694000: (2.418699084s)
--- PASS: TestKicStaticIP (24.54s)

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (50.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-696000 --driver=docker 
E1003 18:11:59.794629   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/addons-890000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-696000 --driver=docker : (21.830165346s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-698000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-698000 --driver=docker : (22.536413197s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-696000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-698000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-698000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-698000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-698000: (2.445060741s)
helpers_test.go:175: Cleaning up "first-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-696000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-696000: (2.444576937s)
--- PASS: TestMinikubeProfile (50.93s)
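For reference, the profile juggling above reduces to the following commands (profile names are placeholders): create two profiles, switch the active one with the profile subcommand, and inspect the result with profile list -ojson.

	out/minikube-darwin-amd64 start -p first-demo --driver=docker
	out/minikube-darwin-amd64 start -p second-demo --driver=docker
	out/minikube-darwin-amd64 profile first-demo
	out/minikube-darwin-amd64 profile list -ojson
	out/minikube-darwin-amd64 delete -p second-demo
	out/minikube-darwin-amd64 delete -p first-demo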

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-755000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E1003 18:12:48.008879   10863 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-10413/.minikube/profiles/functional-913000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-755000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.224914345s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.23s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-755000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
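The two mount steps above can be replayed locally; all flag values are taken from the logged command line, only the profile name is a placeholder.

	out/minikube-darwin-amd64 start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
	out/minikube-darwin-amd64 -p mount-demo ssh -- ls /minikube-host   # host directory should be visible at the mount point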

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.53s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-772000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-772000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.5237043s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.53s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-772000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.06s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-755000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-755000 --alsologtostderr -v=5: (2.058369855s)
--- PASS: TestMountStart/serial/DeleteFirst (2.06s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-772000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.56s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-772000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-772000: (1.564272647s)
--- PASS: TestMountStart/serial/Stop (1.56s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.68s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-772000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-772000: (7.679866905s)
--- PASS: TestMountStart/serial/RestartStopped (8.68s)

                                                
                                    
TestPreload (138.23s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-789000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-789000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m16.446189817s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-789000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-789000 image pull gcr.io/k8s-minikube/busybox: (1.587045909s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-789000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-789000: (10.874773976s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-789000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-789000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (46.464704349s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-789000 image list
helpers_test.go:175: Cleaning up "test-preload-789000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-789000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-789000: (2.517729568s)
--- PASS: TestPreload (138.23s)
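The preload round-trip above corresponds to this sequence (placeholder profile name; the busybox image is the one pulled by the test): start without a preload, pull an extra image, stop, restart, and check that the image is still listed.

	out/minikube-darwin-amd64 start -p preload-demo --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.24.4
	out/minikube-darwin-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-darwin-amd64 stop -p preload-demo
	out/minikube-darwin-amd64 start -p preload-demo --memory=2200 --driver=docker
	out/minikube-darwin-amd64 -p preload-demo image list   # gcr.io/k8s-minikube/busybox should still appear
	out/minikube-darwin-amd64 delete -p preload-demo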

                                                
                                    

Test skip (17/181)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (13.99s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 13.823373ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-krlfd" [d8a97afe-5af8-426b-ae9c-19251c78c34f] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014538829s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-49dlt" [f0725f08-62a0-4746-9b50-fff7d11712d9] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012705705s
addons_test.go:318: (dbg) Run:  kubectl --context addons-890000 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-890000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-890000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.88923666s)
addons_test.go:333: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.99s)
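The part of the registry check that did run before the skip is a single in-cluster probe; the context name below is a placeholder for a cluster with the registry addon enabled.

	kubectl --context addons-demo run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"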

                                                
                                    
TestAddons/parallel/Ingress (13.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-890000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-890000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-890000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66792a68-ea56-4996-ba50-0f4a463f1e72] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66792a68-ea56-4996-ba50-0f4a463f1e72] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.011916171s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p addons-890000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (13.89s)
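The ingress probe that passed before the skip is a curl from inside the node, sent with the Host header the test ingress routes on; the profile name below is a placeholder.

	out/minikube-darwin-amd64 -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"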

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-913000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-913000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-wghc2" [7ca412db-25fe-49c3-893d-d5afb236abf7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-wghc2" [7ca412db-25fe-49c3-893d-d5afb236abf7] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.011889722s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.21s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    