Test Report: Docker_macOS 15003

8cf175ff8162c9e1537c51b4f60112d1d789e51d:2023-06-13:29694

Failed tests (15/316)

TestIngressAddonLegacy/StartLegacyK8sCluster (277.69s)
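The failing step is the initial cluster start. A minimal local repro, using the command recorded in this log (the profile name is the one generated by the test harness; any name works):

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-779000   # clean up the profile afterwards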

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0613 11:54:22.779654   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:56:38.935326   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:56:42.366927   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.373381   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.384776   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.405175   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.445783   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.526001   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.687986   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:43.008158   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:43.648633   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:44.930892   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:47.492112   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:52.614049   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:57:02.856613   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:57:06.626790   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:57:23.337519   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:58:04.300215   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m37.650451816s)
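Note: the cert_rotation.go errors above reference client certificates for the addons-054000 and functional-216000 profiles, i.e. earlier tests in this run whose profile directories have since been removed. They are most likely background noise from client-go's certificate-rotation file watcher, not the cause of this failure; the failure itself is the start command exiting with status 109 after roughly 4m38s.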

-- stdout --
	* [ingress-addon-legacy-779000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-779000 in cluster ingress-addon-legacy-779000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
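"Generating certificates and keys ..." and "Booting up control plane ..." appear twice in the stdout above, which suggests the kubeadm bootstrap was retried once before minikube gave up. When triaging a run like this, the next artifact to pull is usually the cluster log, e.g. (a sketch; run while the profile still exists):

	out/minikube-darwin-amd64 logs -p ingress-addon-legacy-779000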
** stderr ** 
	I0613 11:54:10.823011   23427 out.go:296] Setting OutFile to fd 1 ...
	I0613 11:54:10.823179   23427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:54:10.823185   23427 out.go:309] Setting ErrFile to fd 2...
	I0613 11:54:10.823190   23427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:54:10.823303   23427 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 11:54:10.824801   23427 out.go:303] Setting JSON to false
	I0613 11:54:10.843964   23427 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6821,"bootTime":1686675629,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 11:54:10.844056   23427 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 11:54:10.865606   23427 out.go:177] * [ingress-addon-legacy-779000] minikube v1.30.1 on Darwin 13.4
	I0613 11:54:10.908628   23427 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 11:54:10.908583   23427 notify.go:220] Checking for updates...
	I0613 11:54:10.930862   23427 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 11:54:10.952570   23427 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 11:54:10.973502   23427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 11:54:10.995609   23427 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 11:54:11.017532   23427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 11:54:11.039190   23427 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 11:54:11.097187   23427 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 11:54:11.097315   23427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:54:11.192011   23427 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:54:11.181285431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 11:54:11.213970   23427 out.go:177] * Using the docker driver based on user configuration
	I0613 11:54:11.235819   23427 start.go:297] selected driver: docker
	I0613 11:54:11.235846   23427 start.go:884] validating driver "docker" against <nil>
	I0613 11:54:11.235866   23427 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 11:54:11.239933   23427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:54:11.333628   23427 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:54:11.321874773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 11:54:11.333789   23427 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0613 11:54:11.333972   23427 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0613 11:54:11.357447   23427 out.go:177] * Using Docker Desktop driver with root privileges
	I0613 11:54:11.378212   23427 cni.go:84] Creating CNI manager for ""
	I0613 11:54:11.378260   23427 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 11:54:11.378272   23427 start_flags.go:319] config:
	{Name:ingress-addon-legacy-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:54:11.399045   23427 out.go:177] * Starting control plane node ingress-addon-legacy-779000 in cluster ingress-addon-legacy-779000
	I0613 11:54:11.441496   23427 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 11:54:11.463240   23427 out.go:177] * Pulling base image ...
	I0613 11:54:11.506547   23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0613 11:54:11.506583   23427 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 11:54:11.557336   23427 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 11:54:11.557361   23427 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 11:54:11.605208   23427 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0613 11:54:11.605227   23427 cache.go:57] Caching tarball of preloaded images
	I0613 11:54:11.605482   23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0613 11:54:11.627161   23427 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0613 11:54:11.670227   23427 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:54:11.887437   23427 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0613 11:54:27.746063   23427 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:54:27.746259   23427 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:54:28.367884   23427 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0613 11:54:28.368143   23427 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/config.json ...
	I0613 11:54:28.368172   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/config.json: {Name:mk38925f429f1551ce8de16609abb39837213218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:28.368489   23427 cache.go:195] Successfully downloaded all kic artifacts
	I0613 11:54:28.368512   23427 start.go:365] acquiring machines lock for ingress-addon-legacy-779000: {Name:mk814d28bdc1de21db092a373c6c7d9d40f769d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 11:54:28.368651   23427 start.go:369] acquired machines lock for "ingress-addon-legacy-779000" in 131.826µs
	I0613 11:54:28.368672   23427 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0613 11:54:28.368725   23427 start.go:125] createHost starting for "" (driver="docker")
	I0613 11:54:28.391423   23427 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0613 11:54:28.391755   23427 start.go:159] libmachine.API.Create for "ingress-addon-legacy-779000" (driver="docker")
	I0613 11:54:28.391806   23427 client.go:168] LocalClient.Create starting
	I0613 11:54:28.391997   23427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem
	I0613 11:54:28.392070   23427 main.go:141] libmachine: Decoding PEM data...
	I0613 11:54:28.392103   23427 main.go:141] libmachine: Parsing certificate...
	I0613 11:54:28.392232   23427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem
	I0613 11:54:28.392291   23427 main.go:141] libmachine: Decoding PEM data...
	I0613 11:54:28.392309   23427 main.go:141] libmachine: Parsing certificate...
	I0613 11:54:28.412280   23427 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-779000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0613 11:54:28.465965   23427 cli_runner.go:211] docker network inspect ingress-addon-legacy-779000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0613 11:54:28.466089   23427 network_create.go:281] running [docker network inspect ingress-addon-legacy-779000] to gather additional debugging logs...
	I0613 11:54:28.466107   23427 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-779000
	W0613 11:54:28.516643   23427 cli_runner.go:211] docker network inspect ingress-addon-legacy-779000 returned with exit code 1
	I0613 11:54:28.516666   23427 network_create.go:284] error running [docker network inspect ingress-addon-legacy-779000]: docker network inspect ingress-addon-legacy-779000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-779000 not found
	I0613 11:54:28.516691   23427 network_create.go:286] output of [docker network inspect ingress-addon-legacy-779000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-779000 not found
	
	** /stderr **
	I0613 11:54:28.516778   23427 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0613 11:54:28.566767   23427 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00079aed0}
	I0613 11:54:28.566806   23427 network_create.go:123] attempt to create docker network ingress-addon-legacy-779000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0613 11:54:28.566887   23427 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 ingress-addon-legacy-779000
	I0613 11:54:28.649238   23427 network_create.go:107] docker network ingress-addon-legacy-779000 192.168.49.0/24 created
	I0613 11:54:28.649273   23427 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-779000" container
	I0613 11:54:28.649383   23427 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0613 11:54:28.697876   23427 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-779000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --label created_by.minikube.sigs.k8s.io=true
	I0613 11:54:28.747999   23427 oci.go:103] Successfully created a docker volume ingress-addon-legacy-779000
	I0613 11:54:28.748144   23427 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-779000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --entrypoint /usr/bin/test -v ingress-addon-legacy-779000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0613 11:54:29.141540   23427 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-779000
	I0613 11:54:29.141578   23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0613 11:54:29.141593   23427 kic.go:190] Starting extracting preloaded images to volume ...
	I0613 11:54:29.141735   23427 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-779000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0613 11:54:35.139433   23427 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-779000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (5.99739474s)
	I0613 11:54:35.139459   23427 kic.go:199] duration metric: took 5.997683 seconds to extract preloaded images to volume
	I0613 11:54:35.139591   23427 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0613 11:54:35.242781   23427 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-779000 --name ingress-addon-legacy-779000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --network ingress-addon-legacy-779000 --ip 192.168.49.2 --volume ingress-addon-legacy-779000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0613 11:54:35.525225   23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Running}}
	I0613 11:54:35.578891   23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
	I0613 11:54:35.638705   23427 cli_runner.go:164] Run: docker exec ingress-addon-legacy-779000 stat /var/lib/dpkg/alternatives/iptables
	I0613 11:54:35.745576   23427 oci.go:144] the created container "ingress-addon-legacy-779000" has a running status.
	I0613 11:54:35.745621   23427 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa...
	I0613 11:54:36.010741   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0613 11:54:36.010822   23427 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0613 11:54:36.072127   23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
	I0613 11:54:36.126364   23427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0613 11:54:36.126383   23427 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-779000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0613 11:54:36.217614   23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
	I0613 11:54:36.268828   23427 machine.go:88] provisioning docker machine ...
	I0613 11:54:36.268873   23427 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-779000"
	I0613 11:54:36.268989   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:36.320226   23427 main.go:141] libmachine: Using SSH client type: native
	I0613 11:54:36.320614   23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 56371 <nil> <nil>}
	I0613 11:54:36.320630   23427 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-779000 && echo "ingress-addon-legacy-779000" | sudo tee /etc/hostname
	I0613 11:54:36.449082   23427 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-779000
	
	I0613 11:54:36.449174   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:36.499205   23427 main.go:141] libmachine: Using SSH client type: native
	I0613 11:54:36.499557   23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 56371 <nil> <nil>}
	I0613 11:54:36.499571   23427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-779000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-779000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-779000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 11:54:36.618985   23427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 11:54:36.619010   23427 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 11:54:36.619029   23427 ubuntu.go:177] setting up certificates
	I0613 11:54:36.619043   23427 provision.go:83] configureAuth start
	I0613 11:54:36.619132   23427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-779000
	I0613 11:54:36.668936   23427 provision.go:138] copyHostCerts
	I0613 11:54:36.668987   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 11:54:36.669048   23427 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 11:54:36.669059   23427 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 11:54:36.669202   23427 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 11:54:36.669413   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 11:54:36.669471   23427 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 11:54:36.669476   23427 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 11:54:36.669542   23427 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 11:54:36.669676   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 11:54:36.669717   23427 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 11:54:36.669722   23427 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 11:54:36.669782   23427 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 11:54:36.669923   23427 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-779000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-779000]
	I0613 11:54:36.732525   23427 provision.go:172] copyRemoteCerts
	I0613 11:54:36.732593   23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 11:54:36.732648   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:36.783106   23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 11:54:36.872436   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0613 11:54:36.872515   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 11:54:36.894516   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0613 11:54:36.894587   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0613 11:54:36.916348   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0613 11:54:36.916419   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0613 11:54:36.938462   23427 provision.go:86] duration metric: configureAuth took 319.396158ms
	I0613 11:54:36.938480   23427 ubuntu.go:193] setting minikube options for container-runtime
	I0613 11:54:36.938637   23427 config.go:182] Loaded profile config "ingress-addon-legacy-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0613 11:54:36.938704   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:36.990530   23427 main.go:141] libmachine: Using SSH client type: native
	I0613 11:54:36.990878   23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 56371 <nil> <nil>}
	I0613 11:54:36.990904   23427 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 11:54:37.110800   23427 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 11:54:37.110815   23427 ubuntu.go:71] root file system type: overlay
	I0613 11:54:37.110907   23427 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 11:54:37.110988   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:37.160524   23427 main.go:141] libmachine: Using SSH client type: native
	I0613 11:54:37.160864   23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 56371 <nil> <nil>}
	I0613 11:54:37.160912   23427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 11:54:37.287788   23427 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 11:54:37.287885   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:37.338339   23427 main.go:141] libmachine: Using SSH client type: native
	I0613 11:54:37.338683   23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 56371 <nil> <nil>}
	I0613 11:54:37.338696   23427 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 11:54:38.001127   23427 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-05-25 21:51:00.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 18:54:37.284599717 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
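	The diff above is minikube replacing the stock docker.service unit: the bare ExecStart= line clears the inherited command (the standard systemd idiom) before the new ExecStart exposes dockerd on tcp://0.0.0.0:2376 with the TLS material provisioned earlier. To inspect the unit that actually landed in the node (a sketch, assuming the container is still running):
	
		out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-779000 "systemctl cat docker"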
	I0613 11:54:38.001155   23427 machine.go:91] provisioned docker machine in 1.732254071s
	I0613 11:54:38.001164   23427 client.go:171] LocalClient.Create took 9.609063576s
	I0613 11:54:38.001181   23427 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-779000" took 9.609140761s
	I0613 11:54:38.001192   23427 start.go:300] post-start starting for "ingress-addon-legacy-779000" (driver="docker")
	I0613 11:54:38.001205   23427 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 11:54:38.001292   23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 11:54:38.001360   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:38.052179   23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 11:54:38.142086   23427 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 11:54:38.146206   23427 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 11:54:38.146234   23427 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 11:54:38.146242   23427 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 11:54:38.146246   23427 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 11:54:38.146255   23427 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 11:54:38.146346   23427 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 11:54:38.146539   23427 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 11:54:38.146546   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> /etc/ssl/certs/208002.pem
	I0613 11:54:38.146726   23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 11:54:38.155691   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 11:54:38.177454   23427 start.go:303] post-start completed in 176.240976ms
	I0613 11:54:38.177991   23427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-779000
	I0613 11:54:38.227190   23427 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/config.json ...
	I0613 11:54:38.227644   23427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 11:54:38.227713   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:38.277216   23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 11:54:38.362769   23427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 11:54:38.368057   23427 start.go:128] duration metric: createHost completed in 9.999019363s
	I0613 11:54:38.368077   23427 start.go:83] releasing machines lock for "ingress-addon-legacy-779000", held for 9.999118341s
	I0613 11:54:38.368177   23427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-779000
	I0613 11:54:38.418890   23427 ssh_runner.go:195] Run: cat /version.json
	I0613 11:54:38.418936   23427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 11:54:38.418963   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:38.419017   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:38.474553   23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 11:54:38.474584   23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 11:54:38.662799   23427 ssh_runner.go:195] Run: systemctl --version
	I0613 11:54:38.668234   23427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0613 11:54:38.673574   23427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0613 11:54:38.696647   23427 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0613 11:54:38.696720   23427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0613 11:54:38.712847   23427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0613 11:54:38.729063   23427 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
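
The two find/sed one-liners above are hard to read through the log's quote-mangling; their effect is to pin every non-podman bridge CNI config onto the 10.244.0.0/16 pod CIDR that kubeadm is given later in this run. A simplified equivalent for a single file (file name taken from the log's own output; it ignores the trailing-comma handling in the original):

	# Drop IPv6 subnet entries and force the bridge subnet onto the pod CIDR.
	sudo sed -i -r \
	  -e '/"subnet": ".*:.*"/d' \
	  -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	  /etc/cni/net.d/100-crio-bridge.conf
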
	I0613 11:54:38.729077   23427 start.go:464] detecting cgroup driver to use...
	I0613 11:54:38.729092   23427 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 11:54:38.729204   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 11:54:38.744995   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0613 11:54:38.755045   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 11:54:38.764925   23427 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 11:54:38.764985   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 11:54:38.774968   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 11:54:38.784895   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 11:54:38.794562   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 11:54:38.804464   23427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 11:54:38.813979   23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0613 11:54:38.824108   23427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 11:54:38.832848   23427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 11:54:38.841748   23427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 11:54:38.911095   23427 ssh_runner.go:195] Run: sudo systemctl restart containerd
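
For reference, the containerd reconfiguration in this block reduces to a handful of config.toml edits followed by a restart; the same steps written out as a plain script, with paths and values exactly as logged:

	# Point crictl at containerd, then align containerd with the cgroupfs driver.
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
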
	I0613 11:54:38.989458   23427 start.go:464] detecting cgroup driver to use...
	I0613 11:54:38.989477   23427 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 11:54:38.989541   23427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 11:54:39.001198   23427 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 11:54:39.001268   23427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 11:54:39.013274   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 11:54:39.031801   23427 ssh_runner.go:195] Run: which cri-dockerd
	I0613 11:54:39.036785   23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 11:54:39.047281   23427 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 11:54:39.065747   23427 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 11:54:39.166831   23427 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 11:54:39.259644   23427 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 11:54:39.259663   23427 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 11:54:39.277329   23427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 11:54:39.369042   23427 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 11:54:39.617663   23427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 11:54:39.644691   23427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
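
The 144-byte daemon.json written above is not reproduced in the log; as an illustration only (hypothetical content, not the exact file minikube generated), forcing Docker onto the cgroupfs driver typically looks like:

	# Illustrative only -- the exact 144-byte payload minikube wrote is not shown in the log.
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker

The daemon-reload/restart pair at the end is exactly the sequence logged above.
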
	I0613 11:54:39.718212   23427 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	I0613 11:54:39.718413   23427 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-779000 dig +short host.docker.internal
	I0613 11:54:39.825476   23427 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 11:54:39.825597   23427 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 11:54:39.830659   23427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
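
The host IP here is discovered by digging host.docker.internal from inside the node, then pinned idempotently into /etc/hosts: the grep -v strips any stale host.minikube.internal line before a fresh one is appended. The same pattern spelled out (container name is this run's profile):

	# Resolve the Docker Desktop host address from inside the node ...
	HOSTIP="$(docker exec -t ingress-addon-legacy-779000 dig +short host.docker.internal)"
	# ... then rewrite /etc/hosts idempotently, as the logged one-liner does.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$HOSTIP"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
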
	I0613 11:54:39.842029   23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:54:39.894646   23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0613 11:54:39.894733   23427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 11:54:39.916152   23427 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0613 11:54:39.916175   23427 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0613 11:54:39.916252   23427 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 11:54:39.925603   23427 ssh_runner.go:195] Run: which lz4
	I0613 11:54:39.929999   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0613 11:54:39.930133   23427 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0613 11:54:39.934359   23427 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0613 11:54:39.934385   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0613 11:54:45.825419   23427 docker.go:600] Took 5.895175 seconds to copy over tarball
	I0613 11:54:45.849756   23427 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0613 11:54:48.238625   23427 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.388770785s)
	I0613 11:54:48.238641   23427 ssh_runner.go:146] rm: /preloaded.tar.lz4
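
The ~424 MB preload tarball is copied into the node and unpacked directly over /var, which seeds /var/lib/docker with the v1.18.20 image layers; the manual equivalent of the logged steps, once the tarball is in place:

	# minikube copies the preload over its SSH tunnel, then inside the node:
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack the docker image store under /var
	sudo rm /preloaded.tar.lz4                       # reclaim the space once extracted
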
	I0613 11:54:48.320737   23427 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 11:54:48.330050   23427 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0613 11:54:48.346119   23427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 11:54:48.416801   23427 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 11:54:49.697923   23427 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.281061948s)
	I0613 11:54:49.698031   23427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 11:54:49.719059   23427 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0613 11:54:49.719080   23427 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0613 11:54:49.719088   23427 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0613 11:54:49.725175   23427 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0613 11:54:49.725175   23427 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 11:54:49.725485   23427 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0613 11:54:49.726360   23427 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0613 11:54:49.726451   23427 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0613 11:54:49.726570   23427 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0613 11:54:49.726929   23427 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0613 11:54:49.727173   23427 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0613 11:54:49.732746   23427 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0613 11:54:49.732942   23427 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 11:54:49.733884   23427 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0613 11:54:49.734147   23427 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0613 11:54:49.734409   23427 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0613 11:54:49.735921   23427 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0613 11:54:49.736198   23427 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0613 11:54:49.736818   23427 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0613 11:54:50.861250   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0613 11:54:50.883845   23427 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0613 11:54:50.883889   23427 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0613 11:54:50.883949   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0613 11:54:50.905495   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0613 11:54:51.093369   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 11:54:51.379018   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0613 11:54:51.401377   23427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0613 11:54:51.401411   23427 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0613 11:54:51.401484   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0613 11:54:51.424520   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0613 11:54:51.455927   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0613 11:54:51.480588   23427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0613 11:54:51.480642   23427 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0613 11:54:51.480708   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0613 11:54:51.504710   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0613 11:54:51.630996   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0613 11:54:51.653265   23427 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0613 11:54:51.653305   23427 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0613 11:54:51.653374   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0613 11:54:51.676893   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0613 11:54:51.861415   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0613 11:54:51.885139   23427 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0613 11:54:51.885164   23427 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0613 11:54:51.885218   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0613 11:54:51.909080   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0613 11:54:52.169698   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0613 11:54:52.192142   23427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0613 11:54:52.192176   23427 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0613 11:54:52.192243   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0613 11:54:52.213451   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0613 11:54:52.388921   23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0613 11:54:52.410515   23427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0613 11:54:52.410544   23427 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0613 11:54:52.410623   23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0613 11:54:52.430816   23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0613 11:54:52.430865   23427 cache_images.go:92] LoadImages completed in 2.711687681s
	W0613 11:54:52.430914   23427 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
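
What is happening in this block: the preload ships images tagged k8s.gcr.io/*, but LoadImages (started at 11:54:49) asks for the registry.k8s.io/* names, so each image is judged missing, removed, and re-loaded from the local file cache -- which does not exist on this runner, hence the warning. The warning is non-fatal and the start continues. A hypothetical workaround sketch (not something minikube ran) would be to alias the preloaded images before any removal:

	# Hypothetical: retag the preloaded k8s.gcr.io images to the expected names.
	for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	  docker tag "k8s.gcr.io/${img}:v1.18.20" "registry.k8s.io/${img}:v1.18.20"
	done
	docker tag k8s.gcr.io/pause:3.2      registry.k8s.io/pause:3.2
	docker tag k8s.gcr.io/coredns:1.6.7  registry.k8s.io/coredns:1.6.7
	docker tag k8s.gcr.io/etcd:3.4.3-0   registry.k8s.io/etcd:3.4.3-0
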
	I0613 11:54:52.430988   23427 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 11:54:52.480864   23427 cni.go:84] Creating CNI manager for ""
	I0613 11:54:52.480881   23427 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 11:54:52.480899   23427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0613 11:54:52.480915   23427 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-779000 NodeName:ingress-addon-legacy-779000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0613 11:54:52.481030   23427 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-779000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
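
One thing worth verifying in this rendered config: the kubelet's cgroupDriver must match the container runtime's, and both sides were set to cgroupfs earlier in this run. A quick consistency check from inside the node, using commands that already appear elsewhere in this log:

	docker info --format '{{.CgroupDriver}}'          # expect: cgroupfs
	grep cgroupDriver /var/tmp/minikube/kubeadm.yaml  # expect: cgroupDriver: cgroupfs
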
	
	I0613 11:54:52.481111   23427 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-779000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
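
The unit fragment above is installed as a systemd drop-in (the 10-kubeadm.conf scp'd at 11:54:52): the empty ExecStart= first clears the stock unit's command line so the minikube-specific invocation fully replaces it rather than appending. To inspect the merged result and pick up the change, using the same systemctl subcommands seen elsewhere in this log:

	sudo systemctl cat kubelet     # unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload   # required after any drop-in edit
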
	I0613 11:54:52.481175   23427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0613 11:54:52.490418   23427 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 11:54:52.490489   23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 11:54:52.499380   23427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0613 11:54:52.515605   23427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0613 11:54:52.532083   23427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0613 11:54:52.548559   23427 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0613 11:54:52.553083   23427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 11:54:52.564320   23427 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000 for IP: 192.168.49.2
	I0613 11:54:52.564339   23427 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:52.564519   23427 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 11:54:52.564583   23427 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 11:54:52.564634   23427 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.key
	I0613 11:54:52.564652   23427 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.crt with IP's: []
	I0613 11:54:52.849299   23427 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.crt ...
	I0613 11:54:52.849314   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.crt: {Name:mk12778a0174bdc1fc09c0d55a6fd7f3d05cd83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:52.849629   23427 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.key ...
	I0613 11:54:52.849637   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.key: {Name:mk8b6b19d254a0fe5245af650025a25a6b542746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:52.849840   23427 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2
	I0613 11:54:52.849854   23427 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0613 11:54:52.944884   23427 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2 ...
	I0613 11:54:52.944892   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2: {Name:mk19b0fd53d494d349d0be176f5bfefb19d62dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:52.945154   23427 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2 ...
	I0613 11:54:52.945161   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2: {Name:mk7e62afc08dc057bfa6dde33979b944dc9d3fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:52.945382   23427 certs.go:337] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt
	I0613 11:54:52.945581   23427 certs.go:341] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key
	I0613 11:54:52.945776   23427 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key
	I0613 11:54:52.945788   23427 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt with IP's: []
	I0613 11:54:53.055872   23427 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt ...
	I0613 11:54:53.055880   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt: {Name:mkf9bc2529b4c3414209a490a1381c41eb01337c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:53.056095   23427 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key ...
	I0613 11:54:53.056103   23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key: {Name:mk29cff19e77fa275e8a69816ee1f8fe0d9310f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:54:53.056294   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0613 11:54:53.056324   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0613 11:54:53.056345   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0613 11:54:53.056365   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0613 11:54:53.056390   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0613 11:54:53.056416   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0613 11:54:53.056435   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0613 11:54:53.056456   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0613 11:54:53.056552   23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 11:54:53.056618   23427 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 11:54:53.056630   23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 11:54:53.056671   23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 11:54:53.056702   23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 11:54:53.056743   23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 11:54:53.056814   23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 11:54:53.056848   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> /usr/share/ca-certificates/208002.pem
	I0613 11:54:53.056870   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0613 11:54:53.056888   23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem -> /usr/share/ca-certificates/20800.pem
	I0613 11:54:53.057381   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 11:54:53.080830   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0613 11:54:53.102837   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 11:54:53.124562   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0613 11:54:53.146528   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 11:54:53.168334   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 11:54:53.190413   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 11:54:53.212979   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 11:54:53.234813   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 11:54:53.256918   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 11:54:53.278971   23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 11:54:53.300956   23427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 11:54:53.317555   23427 ssh_runner.go:195] Run: openssl version
	I0613 11:54:53.323676   23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 11:54:53.333510   23427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 11:54:53.338094   23427 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 11:54:53.338149   23427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 11:54:53.345327   23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
	I0613 11:54:53.355360   23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 11:54:53.365162   23427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 11:54:53.369494   23427 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 11:54:53.369545   23427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 11:54:53.376916   23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 11:54:53.386704   23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 11:54:53.396479   23427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 11:54:53.400962   23427 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 11:54:53.401010   23427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 11:54:53.407999   23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
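
The 8-hex-digit link names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: `openssl x509 -hash` prints the hash under which the TLS stack expects to find a CA in /etc/ssl/certs. The pattern for one cert, matching the logged commands:

	# Compute the subject hash, then expose the CA under <hash>.0 as OpenSSL expects.
	h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
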
	I0613 11:54:53.417690   23427 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 11:54:53.421910   23427 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0613 11:54:53.421960   23427 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:54:53.422053   23427 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 11:54:53.442681   23427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 11:54:53.451987   23427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 11:54:53.460922   23427 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 11:54:53.460975   23427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 11:54:53.469759   23427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0613 11:54:53.469788   23427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 11:54:53.520794   23427 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0613 11:54:53.520838   23427 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 11:54:53.771934   23427 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 11:54:53.772024   23427 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 11:54:53.772106   23427 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 11:54:53.958924   23427 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 11:54:53.959596   23427 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 11:54:53.959664   23427 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0613 11:54:54.034320   23427 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 11:54:54.078589   23427 out.go:204]   - Generating certificates and keys ...
	I0613 11:54:54.078678   23427 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 11:54:54.078765   23427 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 11:54:54.497170   23427 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0613 11:54:54.597518   23427 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0613 11:54:54.726338   23427 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0613 11:54:54.775230   23427 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0613 11:54:54.975184   23427 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0613 11:54:54.975295   23427 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0613 11:54:55.088341   23427 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0613 11:54:55.088469   23427 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0613 11:54:55.237143   23427 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0613 11:54:55.358543   23427 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0613 11:54:55.577993   23427 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0613 11:54:55.578064   23427 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 11:54:55.734667   23427 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 11:54:55.866241   23427 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 11:54:55.996140   23427 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 11:54:56.186237   23427 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 11:54:56.186642   23427 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 11:54:56.208044   23427 out.go:204]   - Booting up control plane ...
	I0613 11:54:56.208148   23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 11:54:56.208239   23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 11:54:56.208348   23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 11:54:56.208449   23427 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 11:54:56.208616   23427 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 11:55:36.197751   23427 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 11:55:36.198481   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:55:36.198735   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:55:41.199865   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:55:41.200099   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:55:51.201726   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:55:51.201966   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:56:11.204293   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:56:11.204508   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:56:51.206695   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:56:51.206954   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:56:51.206974   23427 kubeadm.go:322] 
	I0613 11:56:51.207013   23427 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0613 11:56:51.207102   23427 kubeadm.go:322] 		timed out waiting for the condition
	I0613 11:56:51.207126   23427 kubeadm.go:322] 
	I0613 11:56:51.207163   23427 kubeadm.go:322] 	This error is likely caused by:
	I0613 11:56:51.207191   23427 kubeadm.go:322] 		- The kubelet is not running
	I0613 11:56:51.207308   23427 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 11:56:51.207319   23427 kubeadm.go:322] 
	I0613 11:56:51.207394   23427 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 11:56:51.207426   23427 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0613 11:56:51.207453   23427 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0613 11:56:51.207457   23427 kubeadm.go:322] 
	I0613 11:56:51.207553   23427 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 11:56:51.207639   23427 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0613 11:56:51.207651   23427 kubeadm.go:322] 
	I0613 11:56:51.207720   23427 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0613 11:56:51.207764   23427 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0613 11:56:51.207847   23427 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0613 11:56:51.207873   23427 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0613 11:56:51.207881   23427 kubeadm.go:322] 
	I0613 11:56:51.211036   23427 kubeadm.go:322] W0613 18:54:53.519522    1673 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0613 11:56:51.211195   23427 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 11:56:51.211260   23427 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 11:56:51.211367   23427 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0613 11:56:51.211459   23427 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 11:56:51.211558   23427 kubeadm.go:322] W0613 18:54:56.189875    1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0613 11:56:51.211662   23427 kubeadm.go:322] W0613 18:54:56.190619    1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0613 11:56:51.211728   23427 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 11:56:51.211785   23427 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0613 11:56:51.211889   23427 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0613 18:54:53.519522    1673 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0613 18:54:56.189875    1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0613 18:54:56.190619    1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
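
Net result: kubeadm's wait-control-plane phase gave up after roughly two minutes because the kubelet never answered its health probe on 127.0.0.1:10248. The preflight warnings above (cgroupfs instead of the recommended systemd driver, Docker 24.0.2 unvalidated against this v1.18 kubeadm, kubelet service not enabled) are plausible suspects on this legacy combination. First-pass triage inside the node, exactly as kubeadm suggests:

	systemctl status kubelet                      # is it running at all?
	journalctl -xeu kubelet                       # why it exited, if not
	docker ps -a | grep kube | grep -v pause      # control-plane containers, if any started
	curl -sSL http://localhost:10248/healthz      # the probe kubeadm kept retrying
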
	
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0613 18:54:53.519522    1673 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0613 18:54:56.189875    1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0613 18:54:56.190619    1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
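Before retrying, minikube tears down the failed control plane with kubeadm reset, as the next log line shows. An equivalent manual invocation, reaching the node over minikube ssh and using the pinned binary path from the log (a sketch):

    $ minikube -p ingress-addon-legacy-779000 ssh -- \
        sudo /var/lib/minikube/binaries/v1.18.20/kubeadm reset --cri-socket /var/run/dockershim.sock --force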
	
	I0613 11:56:51.211925   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0613 11:56:51.626878   23427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 11:56:51.638105   23427 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 11:56:51.638167   23427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 11:56:51.647614   23427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0613 11:56:51.647649   23427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 11:56:51.697880   23427 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0613 11:56:51.697932   23427 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 11:56:51.940031   23427 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 11:56:51.940107   23427 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 11:56:51.940183   23427 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 11:56:52.122676   23427 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 11:56:52.123288   23427 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 11:56:52.123321   23427 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0613 11:56:52.193353   23427 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 11:56:52.214985   23427 out.go:204]   - Generating certificates and keys ...
	I0613 11:56:52.215080   23427 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 11:56:52.215156   23427 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 11:56:52.215221   23427 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0613 11:56:52.215282   23427 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0613 11:56:52.215363   23427 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0613 11:56:52.215427   23427 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0613 11:56:52.215481   23427 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0613 11:56:52.215546   23427 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0613 11:56:52.215634   23427 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0613 11:56:52.215711   23427 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0613 11:56:52.215744   23427 kubeadm.go:322] [certs] Using the existing "sa" key
	I0613 11:56:52.215782   23427 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 11:56:52.326715   23427 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 11:56:52.394734   23427 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 11:56:52.743050   23427 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 11:56:52.868674   23427 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 11:56:52.869160   23427 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 11:56:52.891067   23427 out.go:204]   - Booting up control plane ...
	I0613 11:56:52.891277   23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 11:56:52.891407   23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 11:56:52.891520   23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 11:56:52.891679   23427 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 11:56:52.891956   23427 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 11:57:32.880050   23427 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 11:57:32.880882   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:57:32.881082   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:57:37.883269   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:57:37.883483   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:57:47.884825   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:57:47.885047   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:58:07.887411   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:58:07.887640   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:58:47.890480   23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 11:58:47.890881   23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 11:58:47.890896   23427 kubeadm.go:322] 
	I0613 11:58:47.890952   23427 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0613 11:58:47.891081   23427 kubeadm.go:322] 		timed out waiting for the condition
	I0613 11:58:47.891101   23427 kubeadm.go:322] 
	I0613 11:58:47.891181   23427 kubeadm.go:322] 	This error is likely caused by:
	I0613 11:58:47.891256   23427 kubeadm.go:322] 		- The kubelet is not running
	I0613 11:58:47.891487   23427 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 11:58:47.891509   23427 kubeadm.go:322] 
	I0613 11:58:47.891692   23427 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 11:58:47.891735   23427 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0613 11:58:47.891788   23427 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0613 11:58:47.891807   23427 kubeadm.go:322] 
	I0613 11:58:47.891920   23427 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 11:58:47.892006   23427 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0613 11:58:47.892014   23427 kubeadm.go:322] 
	I0613 11:58:47.892159   23427 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0613 11:58:47.892213   23427 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0613 11:58:47.892314   23427 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0613 11:58:47.892341   23427 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0613 11:58:47.892346   23427 kubeadm.go:322] 
	I0613 11:58:47.895480   23427 kubeadm.go:322] W0613 18:56:51.696670    4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0613 11:58:47.895659   23427 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 11:58:47.895731   23427 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 11:58:47.895847   23427 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0613 11:58:47.895955   23427 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 11:58:47.896053   23427 kubeadm.go:322] W0613 18:56:52.872236    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0613 11:58:47.896147   23427 kubeadm.go:322] W0613 18:56:52.873018    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0613 11:58:47.896228   23427 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 11:58:47.896311   23427 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0613 11:58:47.896340   23427 kubeadm.go:406] StartCluster complete in 3m54.467352234s
	I0613 11:58:47.896444   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 11:58:47.916198   23427 logs.go:284] 0 containers: []
	W0613 11:58:47.916211   23427 logs.go:286] No container was found matching "kube-apiserver"
	I0613 11:58:47.916282   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 11:58:47.937301   23427 logs.go:284] 0 containers: []
	W0613 11:58:47.937316   23427 logs.go:286] No container was found matching "etcd"
	I0613 11:58:47.937401   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 11:58:47.957426   23427 logs.go:284] 0 containers: []
	W0613 11:58:47.957443   23427 logs.go:286] No container was found matching "coredns"
	I0613 11:58:47.957514   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 11:58:47.977779   23427 logs.go:284] 0 containers: []
	W0613 11:58:47.977792   23427 logs.go:286] No container was found matching "kube-scheduler"
	I0613 11:58:47.977863   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 11:58:47.998090   23427 logs.go:284] 0 containers: []
	W0613 11:58:47.998105   23427 logs.go:286] No container was found matching "kube-proxy"
	I0613 11:58:47.998169   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 11:58:48.018916   23427 logs.go:284] 0 containers: []
	W0613 11:58:48.018930   23427 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 11:58:48.019006   23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 11:58:48.039074   23427 logs.go:284] 0 containers: []
	W0613 11:58:48.039089   23427 logs.go:286] No container was found matching "kindnet"
	I0613 11:58:48.039096   23427 logs.go:123] Gathering logs for container status ...
	I0613 11:58:48.039104   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 11:58:48.093166   23427 logs.go:123] Gathering logs for kubelet ...
	I0613 11:58:48.093180   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 11:58:48.161218   23427 logs.go:123] Gathering logs for dmesg ...
	I0613 11:58:48.161235   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 11:58:48.176606   23427 logs.go:123] Gathering logs for describe nodes ...
	I0613 11:58:48.176621   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 11:58:48.233328   23427 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 11:58:48.233346   23427 logs.go:123] Gathering logs for Docker ...
	I0613 11:58:48.233353   23427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
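The log-gathering pass above shells into the node and runs ordinary diagnostics; the same commands can be replayed interactively when triaging a run like this (a sketch using this run's profile name):

    $ minikube -p ingress-addon-legacy-779000 ssh -- sudo journalctl -u kubelet -n 400
    $ minikube -p ingress-addon-legacy-779000 ssh -- sudo docker ps -a | grep kube | grep -v pause

Consistent with the "0 containers" results above, the container listing comes back empty here: no control-plane container ever started.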
	W0613 11:58:48.249929   23427 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0613 18:56:51.696670    4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0613 18:56:52.872236    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0613 18:56:52.873018    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0613 11:58:48.249951   23427 out.go:239] * 
	W0613 11:58:48.249991   23427 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0613 18:56:51.696670    4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0613 18:56:52.872236    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0613 18:56:52.873018    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 11:58:48.250006   23427 out.go:239] * 
	W0613 11:58:48.250634   23427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
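For the report requested in the box above, the full log bundle for this profile can be produced with (a sketch; the -p flag selecting this profile is an assumption):

    $ minikube -p ingress-addon-legacy-779000 logs --file=logs.txt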
	I0613 11:58:48.293328   23427 out.go:177] 
	W0613 11:58:48.356434   23427 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0613 18:56:51.696670    4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0613 18:56:52.872236    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0613 18:56:52.873018    4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 11:58:48.356523   23427 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0613 11:58:48.356562   23427 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
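The suggestion targets the cgroup-driver mismatch flagged in the preflight warnings above (Docker reports cgroupfs where systemd is recommended). A sketch of rerunning this test's start command with the suggested override, reusing the flags from this run:

    $ out/minikube-darwin-amd64 delete -p ingress-addon-legacy-779000
    $ out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 \
        --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
        --extra-config=kubelet.cgroup-driver=systemd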
	I0613 11:58:48.378470   23427 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (277.69s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (98.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-779000 addons enable ingress --alsologtostderr -v=5
E0613 11:59:26.225020   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-779000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m38.261871972s)
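Enabling the addon shells into the node and applies the bundled manifest with the cluster's pinned kubectl, retrying on failure; since the apiserver never came up in the previous test, every apply below fails with connection refused on localhost:8443. One such apply, reproduced by hand (a sketch using the paths from the log):

    $ minikube -p ingress-addon-legacy-779000 ssh -- \
        sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml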

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0613 11:58:48.522180   23697 out.go:296] Setting OutFile to fd 1 ...
	I0613 11:58:48.522360   23697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:58:48.522366   23697 out.go:309] Setting ErrFile to fd 2...
	I0613 11:58:48.522370   23697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:58:48.522481   23697 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 11:58:48.523076   23697 config.go:182] Loaded profile config "ingress-addon-legacy-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0613 11:58:48.544624   23697 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0613 11:58:48.566523   23697 config.go:182] Loaded profile config "ingress-addon-legacy-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0613 11:58:48.566558   23697 addons.go:66] Setting ingress=true in profile "ingress-addon-legacy-779000"
	I0613 11:58:48.566569   23697 addons.go:228] Setting addon ingress=true in "ingress-addon-legacy-779000"
	I0613 11:58:48.566650   23697 host.go:66] Checking if "ingress-addon-legacy-779000" exists ...
	I0613 11:58:48.567630   23697 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
	I0613 11:58:48.639239   23697 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0613 11:58:48.681072   23697 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0613 11:58:48.702266   23697 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0613 11:58:48.723277   23697 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0613 11:58:48.744744   23697 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0613 11:58:48.744788   23697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0613 11:58:48.744951   23697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 11:58:48.795577   23697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 11:58:48.892916   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:48.947519   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:48.947545   23697 retry.go:31] will retry after 178.158582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:49.126279   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:49.182164   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:49.182187   23697 retry.go:31] will retry after 353.970678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:49.536719   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:49.592046   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:49.592062   23697 retry.go:31] will retry after 341.43683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:49.933892   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:49.992349   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:49.992371   23697 retry.go:31] will retry after 652.898893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:50.647533   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:50.706547   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:50.706566   23697 retry.go:31] will retry after 739.934307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:51.448918   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:51.505316   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:51.505335   23697 retry.go:31] will retry after 2.842551844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:54.349664   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:54.408417   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:54.408438   23697 retry.go:31] will retry after 4.219934339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:58.629467   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:58:58.685137   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:58:58.685155   23697 retry.go:31] will retry after 3.314150676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:02.001098   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:59:02.057883   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:02.057904   23697 retry.go:31] will retry after 6.834359542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:08.894753   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:59:08.953024   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:08.953046   23697 retry.go:31] will retry after 5.977621838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:14.931540   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:59:14.992033   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:14.992061   23697 retry.go:31] will retry after 9.009312893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:24.002491   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:59:24.059301   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:24.059323   23697 retry.go:31] will retry after 26.159501523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:50.222065   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 11:59:50.278311   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 11:59:50.278332   23697 retry.go:31] will retry after 36.294568958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:26.574282   23697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0613 12:00:26.629263   23697 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:26.629300   23697 addons.go:464] Verifying addon ingress=true in "ingress-addon-legacy-779000"
	I0613 12:00:26.650989   23697 out.go:177] * Verifying ingress addon...
	I0613 12:00:26.673664   23697 out.go:177] 
	W0613 12:00:26.694907   23697 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-779000" does not exist: client config: context "ingress-addon-legacy-779000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-779000" does not exist: client config: context "ingress-addon-legacy-779000" does not exist]
	W0613 12:00:26.694937   23697 out.go:239] * 
	* 
	W0613 12:00:26.701332   23697 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0613 12:00:26.722786   23697 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
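The wall of `retry.go:31] will retry after ...` lines above is minikube's apply-with-retry loop: the addon manifest is copied into the node and `kubectl apply --force` is re-run over SSH with growing, jittered delays (178ms at first, roughly 36s by the end) until the enable's time budget is exhausted, at which point it exits with MK_ADDON_ENABLE. The sketch below reproduces that pattern under stated assumptions; `applyWithRetry` and its parameters are illustrative, not minikube's actual retry API.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// applyWithRetry mimics the loop behind the "will retry after ..." lines:
	// run apply(), and on failure sleep a jittered, roughly doubling delay
	// until maxElapsed has been spent, then give up with the last error.
	func applyWithRetry(apply func() error, initial, maxElapsed time.Duration) error {
		start := time.Now()
		delay := initial
		for {
			err := apply()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("apply failed after %s: %w", maxElapsed, err)
			}
			// Jitter the sleep so concurrent callers do not retry in lockstep;
			// the uneven delays in the log (0.35s, 0.34s, 0.65s, ...) suggest this.
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempt := 0
		err := applyWithRetry(func() error {
			attempt++
			return fmt.Errorf("connection to localhost:8443 refused (attempt %d)", attempt)
		}, 200*time.Millisecond, 5*time.Second)
		fmt.Println(err)
	}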
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-779000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-779000:

-- stdout --
	[
	    {
	        "Id": "48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab",
	        "Created": "2023-06-13T18:54:35.291584304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T18:54:35.516734211Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/hostname",
	        "HostsPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/hosts",
	        "LogPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab-json.log",
	        "Name": "/ingress-addon-legacy-779000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-779000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-779000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-779000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-779000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-779000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-779000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-779000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c54883ce7a545098476d92559c416489908d64b14f6849edeb48e684e795f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56371"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56372"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56373"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56374"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56375"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93c54883ce7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-779000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48876ed6a0a8",
	                        "ingress-addon-legacy-779000"
	                    ],
	                    "NetworkID": "173f00e57e7a444b0899bed49564e1b7e15e64cbb4cdbac60491a617429e2d97",
	                    "EndpointID": "a12cc5202c8f09b6708f4be073fa49b7851f680912e2913e197c91f71a28c45e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
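The `NetworkSettings.Ports` map in this inspect output is exactly what the `cli_runner` call in the stderr above reads, via the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, to find the host-side SSH port (56371 here). A minimal Go sketch of the same lookup, shelling out to the docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker mapped to the container's
	// 22/tcp, using the same template minikube's cli_runner runs above.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("ingress-addon-legacy-779000")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port) // "56371" in this report
	}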
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-779000 -n ingress-addon-legacy-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-779000 -n ingress-addon-legacy-779000: exit status 6 (353.914799ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0613 12:00:27.142090   23728 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-779000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-779000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
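The `status.go:415` error here and the addon failure's `context "ingress-addon-legacy-779000" does not exist` trace back to the same fact: the profile's context was never written to the kubeconfig because the cluster start failed. As an illustration only (not minikube's own code), the same existence check can be done with client-go's kubeconfig loader:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path and context name taken from this report; adjust for your setup.
		kubeconfig := "/Users/jenkins/minikube-integration/15003-20351/kubeconfig"
		name := "ingress-addon-legacy-779000"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// The condition behind "does not appear in ... kubeconfig" above.
			fmt.Printf("context %q does not appear in %s\n", name, kubeconfig)
			return
		}
		fmt.Printf("context %q found\n", name)
	}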
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (98.67s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (90.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-779000 addons enable ingress-dns --alsologtostderr -v=5
E0613 12:01:38.944972   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:01:42.375837   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-779000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m30.039409797s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0613 12:00:27.194613   23738 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:00:27.194869   23738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:00:27.194874   23738 out.go:309] Setting ErrFile to fd 2...
	I0613 12:00:27.194879   23738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:00:27.194996   23738 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:00:27.195609   23738 config.go:182] Loaded profile config "ingress-addon-legacy-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0613 12:00:27.217402   23738 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0613 12:00:27.239278   23738 config.go:182] Loaded profile config "ingress-addon-legacy-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0613 12:00:27.239308   23738 addons.go:66] Setting ingress-dns=true in profile "ingress-addon-legacy-779000"
	I0613 12:00:27.239320   23738 addons.go:228] Setting addon ingress-dns=true in "ingress-addon-legacy-779000"
	I0613 12:00:27.239391   23738 host.go:66] Checking if "ingress-addon-legacy-779000" exists ...
	I0613 12:00:27.240392   23738 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
	I0613 12:00:27.312149   23738 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0613 12:00:27.334135   23738 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0613 12:00:27.355178   23738 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0613 12:00:27.355219   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0613 12:00:27.355395   23738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
	I0613 12:00:27.406525   23738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
	I0613 12:00:27.501943   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:27.556129   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:27.556152   23738 retry.go:31] will retry after 248.152226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:27.804606   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:27.859917   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:27.859941   23738 retry.go:31] will retry after 373.585613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:28.235777   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:28.294429   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:28.294451   23738 retry.go:31] will retry after 632.098997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:28.927038   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:28.986463   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:28.986485   23738 retry.go:31] will retry after 432.44865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:29.419184   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:29.475665   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:29.475691   23738 retry.go:31] will retry after 1.46157065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:30.937950   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:30.994573   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:30.994593   23738 retry.go:31] will retry after 1.305436493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:32.300717   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:32.357127   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:32.357146   23738 retry.go:31] will retry after 2.937438389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:35.294993   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:35.349962   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:35.349988   23738 retry.go:31] will retry after 3.719816521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:39.070235   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:39.137022   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:39.137042   23738 retry.go:31] will retry after 6.926357797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:46.064384   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:46.122367   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:46.122385   23738 retry.go:31] will retry after 5.777800471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:51.900566   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:00:51.957506   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:00:51.957525   23738 retry.go:31] will retry after 16.733763413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:01:08.693947   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:01:08.752170   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:01:08.752187   23738 retry.go:31] will retry after 25.964519751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:01:34.719909   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:01:34.776465   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:01:34.776484   23738 retry.go:31] will retry after 22.261866916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:01:57.039338   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0613 12:01:57.096934   23738 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:01:57.125407   23738 out.go:177] 
	W0613 12:01:57.146375   23738 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0613 12:01:57.146413   23738 out.go:239] * 
	* 
	W0613 12:01:57.152576   23738 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0613 12:01:57.173259   23738 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
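Both addon activations fail for the same underlying reason: every `kubectl apply` inside the node gets `The connection to the server localhost:8443 was refused`, meaning the apiserver that `StartLegacyK8sCluster` should have left running is not listening. A quick, hedged way to confirm that diagnosis from the host is a plain TCP dial against the mapped 8443 port from the inspect output (56375 in this report):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:56375 is the host side of the container's 8443/tcp mapping
		// in this report; inside the node the target would be localhost:8443.
		addr := "127.0.0.1:56375"
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the kubectl errors above:
			// nothing is listening, so the apiserver never came up.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}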
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-779000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-779000:

-- stdout --
	[
	    {
	        "Id": "48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab",
	        "Created": "2023-06-13T18:54:35.291584304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T18:54:35.516734211Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/hostname",
	        "HostsPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/hosts",
	        "LogPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab-json.log",
	        "Name": "/ingress-addon-legacy-779000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-779000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-779000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-779000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-779000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-779000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-779000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-779000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c54883ce7a545098476d92559c416489908d64b14f6849edeb48e684e795f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56371"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56372"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56373"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56374"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56375"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93c54883ce7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-779000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48876ed6a0a8",
	                        "ingress-addon-legacy-779000"
	                    ],
	                    "NetworkID": "173f00e57e7a444b0899bed49564e1b7e15e64cbb4cdbac60491a617429e2d97",
	                    "EndpointID": "a12cc5202c8f09b6708f4be073fa49b7851f680912e2913e197c91f71a28c45e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
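Note on the inspect output above: HostConfig.PortBindings requests HostPort "0" for every exposed port, i.e. an ephemeral host port, and NetworkSettings.Ports records what Docker actually assigned (56371-56375 on 127.0.0.1). To confirm a single mapping outside the test harness, a standard docker CLI query would be (illustrative; container name taken from the log):

    # look up the host side of the container's SSH port
    docker port ingress-addon-legacy-779000 22
    # per the inspect data above, this prints: 127.0.0.1:56371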
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-779000 -n ingress-addon-legacy-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-779000 -n ingress-addon-legacy-779000: exit status 6 (354.810343ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0613 12:01:57.592277   23767 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-779000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-779000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (90.45s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.41s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-779000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-779000:

-- stdout --
	[
	    {
	        "Id": "48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab",
	        "Created": "2023-06-13T18:54:35.291584304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T18:54:35.516734211Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/hostname",
	        "HostsPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/hosts",
	        "LogPath": "/var/lib/docker/containers/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab/48876ed6a0a882ca79162add9281fc200853f0a9e9d7ee9eba0cb6a1f2b0efab-json.log",
	        "Name": "/ingress-addon-legacy-779000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-779000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-779000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48ef1ebde17c8e80f38ffa7992e6dded27f46f385e5007eea76d436fc74763a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-779000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-779000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-779000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-779000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-779000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c54883ce7a545098476d92559c416489908d64b14f6849edeb48e684e795f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56371"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56372"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56373"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56374"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56375"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93c54883ce7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-779000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48876ed6a0a8",
	                        "ingress-addon-legacy-779000"
	                    ],
	                    "NetworkID": "173f00e57e7a444b0899bed49564e1b7e15e64cbb4cdbac60491a617429e2d97",
	                    "EndpointID": "a12cc5202c8f09b6708f4be073fa49b7851f680912e2913e197c91f71a28c45e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-779000 -n ingress-addon-legacy-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-779000 -n ingress-addon-legacy-779000: exit status 6 (357.140208ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0613 12:01:58.000210   23779 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-779000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-779000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.41s)
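Both ValidateIngressDNSAddonActivation and ValidateIngressAddons fail the same way: the stderr above shows that "ingress-addon-legacy-779000" has no entry in the kubeconfig, so no Kubernetes client can be constructed even though the container state is Running. Outside the harness, the repair the status output itself suggests would look like this (illustrative commands; profile name taken from the log):

    # rewrite the kubeconfig entry for this profile
    minikube update-context -p ingress-addon-legacy-779000
    # verify the context is now present
    kubectl config get-contexts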

TestRunningBinaryUpgrade (84.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.441644448.exe start -p running-upgrade-426000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.441644448.exe start -p running-upgrade-426000 --memory=2200 --vm-driver=docker : exit status 70 (1m6.699443857s)

-- stdout --
	! [running-upgrade-426000] minikube v1.9.0 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig39380433
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:23:58.194841803 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-426000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:24:16.952585911 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-426000", then "minikube start -p running-upgrade-426000 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:24:16.952585911 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:132: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.441644448.exe start -p running-upgrade-426000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.441644448.exe start -p running-upgrade-426000 --memory=2200 --vm-driver=docker : exit status 70 (4.094126109s)

-- stdout --
	* [running-upgrade-426000] minikube v1.9.0 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig4126013159
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-426000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:132: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.441644448.exe start -p running-upgrade-426000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.441644448.exe start -p running-upgrade-426000 --memory=2200 --vm-driver=docker : exit status 70 (4.149085633s)

-- stdout --
	* [running-upgrade-426000] minikube v1.9.0 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3513360043
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-426000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:138: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-13 12:24:30.790279 -0700 PDT m=+2537.115072254
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-426000
helpers_test.go:235: (dbg) docker inspect running-upgrade-426000:

-- stdout --
	[
	    {
	        "Id": "d922aa9cbd60b8845f7421041bc0b850d8ca20d28be610854c61aa81ac8e5beb",
	        "Created": "2023-06-13T19:24:06.260620943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 579459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:24:06.455372636Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/d922aa9cbd60b8845f7421041bc0b850d8ca20d28be610854c61aa81ac8e5beb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d922aa9cbd60b8845f7421041bc0b850d8ca20d28be610854c61aa81ac8e5beb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d922aa9cbd60b8845f7421041bc0b850d8ca20d28be610854c61aa81ac8e5beb/hosts",
	        "LogPath": "/var/lib/docker/containers/d922aa9cbd60b8845f7421041bc0b850d8ca20d28be610854c61aa81ac8e5beb/d922aa9cbd60b8845f7421041bc0b850d8ca20d28be610854c61aa81ac8e5beb-json.log",
	        "Name": "/running-upgrade-426000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-426000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ce9aae8e001271ea7785e9fee914f2648d63aae1cb782f273ceb92a950ffd3c8-init/diff:/var/lib/docker/overlay2/bd5d4e5d6e29c152b90cc5ee62014be7bf0ed8e72fbfc9c7f9d15907f6937366/diff:/var/lib/docker/overlay2/629ea90b7463c44281af8facf94b99babf8e2f1a8bbba249ecf4d4a38c077053/diff:/var/lib/docker/overlay2/43a89669bb434d69c16e207115316605e5581a673d267bec603763ca10ae7860/diff:/var/lib/docker/overlay2/6229c8a21fa06566af80ac84eed7dfcfac77aad05af2760837e2fa4f38f3bb81/diff:/var/lib/docker/overlay2/5fe59bf4fca86d2dd693e1b57f40200c9eae3e6af67c52316a9fa227a4efecaa/diff:/var/lib/docker/overlay2/670330be30ea6e867aceedf881c6c81989187a97bfe74bbce21c19d44bbc94c9/diff:/var/lib/docker/overlay2/ae9e860167c87dfae15a19e81c9107ff5c96a3784daedb66b95adbbdaba7c25e/diff:/var/lib/docker/overlay2/bb5e1f22d8511b73f8231e723aefbb454a251d1f53feab386772e2e19a240058/diff:/var/lib/docker/overlay2/0a5910f81daa90fe43ce920e2d6ccba890d3672d2235b8b877238f7f829d500b/diff:/var/lib/docker/overlay2/d33235242748f221d8d97731d76bb2c1aaadcad7be0c63d71f03c420cf5eb37d/diff:/var/lib/docker/overlay2/979a9678f96c73c005ec310abc94c968661a127a12b9eba26ceb218f0f662dce/diff:/var/lib/docker/overlay2/d41e71ca29e1184a624bbaf7a17ca27724209e175998e98d0d17fde6000b371d/diff:/var/lib/docker/overlay2/4b4aaf81bb876aa687125d1b2894767b67f08af2502a14b474ae85ef0fe63b69/diff:/var/lib/docker/overlay2/71b4d602da9337e8077972fff4a79248039c9c69d753d7f0108b872b732610f6/diff:/var/lib/docker/overlay2/79708989956ebd16e975d67910844b03d5c881441f813727f7489eda6c264df1/diff:/var/lib/docker/overlay2/1e31811a33ddb038a79f67fe4eaf9df0bab36984ad6295a3274a06abbb3c7cb4/diff:/var/lib/docker/overlay2/8f20a1e9b92d450879b34af4439556841635e88546372c652c4dd0b0779d874e/diff:/var/lib/docker/overlay2/d2d7dda6a90274cf2aed78112a265a069871fa702a8f5cfe89c62fcdbb532975/diff:/var/lib/docker/overlay2/111cadc0bbbcfe2d59657a70bd899942e4652188868b70c5968af9e77f99be2f/diff:/var/lib/docker/overlay2/de200cb230ab4e7d17c2e0cce405051fa7aab9233e9316629237ed9dff7a36ba/diff:/var/lib/docker/overlay2/f7e359c04e5c9655c68543b182a5e47cf9a29012e1a8be825737c6fe57e7d3d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce9aae8e001271ea7785e9fee914f2648d63aae1cb782f273ceb92a950ffd3c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce9aae8e001271ea7785e9fee914f2648d63aae1cb782f273ceb92a950ffd3c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce9aae8e001271ea7785e9fee914f2648d63aae1cb782f273ceb92a950ffd3c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-426000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-426000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-426000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-426000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-426000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85879f8e0bfc9781ddc51058e136b5e9c233f076c4c58bc25992df06a5c9a390",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57682"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57683"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57681"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/85879f8e0bfc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "3d9098c5b9629e28d6b090801cd63fd8ee6c78ae048f36baf2c4141f10bda39c",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f92755f08c7bc9f387e31ff696d704be3f498e82a4b89101efe28d5f4f3be670",
	                    "EndpointID": "3d9098c5b9629e28d6b090801cd63fd8ee6c78ae048f36baf2c4141f10bda39c",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-426000 -n running-upgrade-426000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-426000 -n running-upgrade-426000: exit status 6 (350.395211ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0613 12:24:31.180730   29805 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-426000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-426000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-426000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-426000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-426000: (2.269414611s)
--- FAIL: TestRunningBinaryUpgrade (84.77s)

TestKubernetesUpgrade (574.37s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0613 12:25:40.003954   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m19.156765728s)

-- stdout --
	* [kubernetes-upgrade-660000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-660000 in cluster kubernetes-upgrade-660000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

-- /stdout --
** stderr ** 
	I0613 12:25:36.372632   30158 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:25:36.372838   30158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:25:36.372843   30158 out.go:309] Setting ErrFile to fd 2...
	I0613 12:25:36.372847   30158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:25:36.372982   30158 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:25:36.374475   30158 out.go:303] Setting JSON to false
	I0613 12:25:36.393721   30158 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8707,"bootTime":1686675629,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 12:25:36.393816   30158 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 12:25:36.415554   30158 out.go:177] * [kubernetes-upgrade-660000] minikube v1.30.1 on Darwin 13.4
	I0613 12:25:36.457578   30158 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 12:25:36.457610   30158 notify.go:220] Checking for updates...
	I0613 12:25:36.500235   30158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:25:36.521457   30158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 12:25:36.542308   30158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 12:25:36.563329   30158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 12:25:36.584412   30158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 12:25:36.605780   30158 config.go:182] Loaded profile config "cert-expiration-367000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:25:36.605960   30158 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 12:25:36.662327   30158 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 12:25:36.662472   30158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:25:36.753378   30158 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:25:36.74313957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:25:36.796788   30158 out.go:177] * Using the docker driver based on user configuration
	I0613 12:25:36.817765   30158 start.go:297] selected driver: docker
	I0613 12:25:36.817786   30158 start.go:884] validating driver "docker" against <nil>
	I0613 12:25:36.817803   30158 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 12:25:36.821836   30158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:25:36.914721   30158 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:25:36.904661441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
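	The struct dump above comes from the `docker system info --format "{{json .}}"` call logged just before it: minikube decodes the daemon's JSON into a Go struct. A minimal sketch of that decode step for a few of the fields visible in the log (NCPU, MemTotal, ServerVersion, OSType); the struct here is illustrative, not minikube's:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the handful of fields inspected above; Docker emits many more.
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		ServerVersion string `json:"ServerVersion"`
		OSType        string `json:"OSType"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		// On the agent above this would print: 24.0.2 linux 6 CPUs 6231715840 bytes
		fmt.Println(info.ServerVersion, info.OSType, info.NCPU, "CPUs", info.MemTotal, "bytes")
	}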
	I0613 12:25:36.914930   30158 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0613 12:25:36.915119   30158 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0613 12:25:36.935682   30158 out.go:177] * Using Docker Desktop driver with root privileges
	I0613 12:25:36.956725   30158 cni.go:84] Creating CNI manager for ""
	I0613 12:25:36.956790   30158 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 12:25:36.956809   30158 start_flags.go:319] config:
	{Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:25:37.000755   30158 out.go:177] * Starting control plane node kubernetes-upgrade-660000 in cluster kubernetes-upgrade-660000
	I0613 12:25:37.021568   30158 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 12:25:37.042803   30158 out.go:177] * Pulling base image ...
	I0613 12:25:37.084840   30158 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 12:25:37.084851   30158 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:25:37.085002   30158 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0613 12:25:37.085027   30158 cache.go:57] Caching tarball of preloaded images
	I0613 12:25:37.085228   30158 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 12:25:37.085243   30158 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
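	The preload check above amounts to a stat of the cached tarball before deciding whether to download it ("Found local preload ... skipping download"). A small illustrative sketch, where preloadPath is a hypothetical helper and the $HOME/.minikube location an assumption (this run used a Jenkins-specific MINIKUBE_HOME):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the cache path seen in the log and reports whether
	// the tarball is already present, in which case the download is skipped.
	func preloadPath(minikubeHome, k8sVersion, runtime string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
		_, err := os.Stat(p)
		return p, err == nil
	}

	func main() {
		p, cached := preloadPath(os.ExpandEnv("$HOME/.minikube"), "v1.16.0", "docker")
		fmt.Println(p, "cached:", cached)
	}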
	I0613 12:25:37.086245   30158 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/config.json ...
	I0613 12:25:37.086359   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/config.json: {Name:mkad98386a772c6b29f825d4e9a6a8e99b5e3e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:37.136098   30158 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 12:25:37.136119   30158 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 12:25:37.136135   30158 cache.go:195] Successfully downloaded all kic artifacts
	I0613 12:25:37.136167   30158 start.go:365] acquiring machines lock for kubernetes-upgrade-660000: {Name:mk952feff60a3e0d983b47946508aa79d68dd1c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 12:25:37.136323   30158 start.go:369] acquired machines lock for "kubernetes-upgrade-660000" in 144.027µs
	I0613 12:25:37.136347   30158 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0613 12:25:37.136412   30158 start.go:125] createHost starting for "" (driver="docker")
	I0613 12:25:37.158101   30158 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0613 12:25:37.158464   30158 start.go:159] libmachine.API.Create for "kubernetes-upgrade-660000" (driver="docker")
	I0613 12:25:37.158517   30158 client.go:168] LocalClient.Create starting
	I0613 12:25:37.158729   30158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem
	I0613 12:25:37.158811   30158 main.go:141] libmachine: Decoding PEM data...
	I0613 12:25:37.158847   30158 main.go:141] libmachine: Parsing certificate...
	I0613 12:25:37.158953   30158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem
	I0613 12:25:37.159009   30158 main.go:141] libmachine: Decoding PEM data...
	I0613 12:25:37.159026   30158 main.go:141] libmachine: Parsing certificate...
	I0613 12:25:37.179435   30158 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-660000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0613 12:25:37.229008   30158 cli_runner.go:211] docker network inspect kubernetes-upgrade-660000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0613 12:25:37.229114   30158 network_create.go:281] running [docker network inspect kubernetes-upgrade-660000] to gather additional debugging logs...
	I0613 12:25:37.229131   30158 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-660000
	W0613 12:25:37.277064   30158 cli_runner.go:211] docker network inspect kubernetes-upgrade-660000 returned with exit code 1
	I0613 12:25:37.277093   30158 network_create.go:284] error running [docker network inspect kubernetes-upgrade-660000]: docker network inspect kubernetes-upgrade-660000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-660000 not found
	I0613 12:25:37.277110   30158 network_create.go:286] output of [docker network inspect kubernetes-upgrade-660000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-660000 not found
	
	** /stderr **
	I0613 12:25:37.277200   30158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0613 12:25:37.327854   30158 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0613 12:25:37.328176   30158 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008b8e90}
	I0613 12:25:37.328187   30158 network_create.go:123] attempt to create docker network kubernetes-upgrade-660000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0613 12:25:37.328256   30158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 kubernetes-upgrade-660000
	W0613 12:25:37.377453   30158 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 kubernetes-upgrade-660000 returned with exit code 1
	W0613 12:25:37.377485   30158 network_create.go:148] failed to create docker network kubernetes-upgrade-660000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 kubernetes-upgrade-660000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0613 12:25:37.377504   30158 network_create.go:115] failed to create docker network kubernetes-upgrade-660000 192.168.58.0/24, will retry: subnet is taken
	I0613 12:25:37.378854   30158 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0613 12:25:37.379177   30158 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e76d30}
	I0613 12:25:37.379190   30158 network_create.go:123] attempt to create docker network kubernetes-upgrade-660000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0613 12:25:37.379264   30158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 kubernetes-upgrade-660000
	I0613 12:25:37.465048   30158 network_create.go:107] docker network kubernetes-upgrade-660000 192.168.67.0/24 created
	I0613 12:25:37.465090   30158 kic.go:117] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-660000" container
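	The network creation above had to retry: 192.168.49.0/24 was reserved, 192.168.58.0/24 failed with "Pool overlaps with other one on this address space", and 192.168.67.0/24 succeeded. A simplified sketch of that walk over candidate /24 subnets (not minikube's actual network package; the candidate list and error matching are assumptions drawn from this log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// createNetwork tries candidate /24 subnets in order until
	// `docker network create` stops failing with the overlap error above.
	func createNetwork(name string, candidates []string) (string, error) {
		for _, subnet := range candidates {
			gateway := strings.TrimSuffix(subnet, "0/24") + "1" // 192.168.67.0/24 -> 192.168.67.1
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			if !strings.Contains(string(out), "Pool overlaps") {
				return "", fmt.Errorf("network create failed: %s", out)
			}
			// subnet is taken by another network; try the next candidate
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := createNetwork("kubernetes-upgrade-660000",
			[]string{"192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"})
		fmt.Println(subnet, err)
	}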
	I0613 12:25:37.465220   30158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0613 12:25:37.514461   30158 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-660000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 --label created_by.minikube.sigs.k8s.io=true
	I0613 12:25:37.564361   30158 oci.go:103] Successfully created a docker volume kubernetes-upgrade-660000
	I0613 12:25:37.564486   30158 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-660000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 --entrypoint /usr/bin/test -v kubernetes-upgrade-660000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0613 12:25:38.020715   30158 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-660000
	I0613 12:25:38.020750   30158 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:25:38.020763   30158 kic.go:190] Starting extracting preloaded images to volume ...
	I0613 12:25:38.020915   30158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-660000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0613 12:25:43.395486   30158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-660000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (5.374595604s)
	I0613 12:25:43.395513   30158 kic.go:199] duration metric: took 5.374869 seconds to extract preloaded images to volume
	I0613 12:25:43.395649   30158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0613 12:25:43.492903   30158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-660000 --name kubernetes-upgrade-660000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-660000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-660000 --network kubernetes-upgrade-660000 --ip 192.168.67.2 --volume kubernetes-upgrade-660000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0613 12:25:43.771121   30158 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Running}}
	I0613 12:25:43.823434   30158 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:25:43.880848   30158 cli_runner.go:164] Run: docker exec kubernetes-upgrade-660000 stat /var/lib/dpkg/alternatives/iptables
	I0613 12:25:43.982162   30158 oci.go:144] the created container "kubernetes-upgrade-660000" has a running status.
	I0613 12:25:43.982200   30158 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa...
	I0613 12:25:44.016066   30158 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0613 12:25:44.087513   30158 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:25:44.142806   30158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0613 12:25:44.142863   30158 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-660000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0613 12:25:44.244702   30158 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:25:44.299949   30158 machine.go:88] provisioning docker machine ...
	I0613 12:25:44.300010   30158 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-660000"
	I0613 12:25:44.300131   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:44.352314   30158 main.go:141] libmachine: Using SSH client type: native
	I0613 12:25:44.352696   30158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 57793 <nil> <nil>}
	I0613 12:25:44.352711   30158 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-660000 && echo "kubernetes-upgrade-660000" | sudo tee /etc/hostname
	I0613 12:25:44.483115   30158 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-660000
	
	I0613 12:25:44.483226   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:44.533597   30158 main.go:141] libmachine: Using SSH client type: native
	I0613 12:25:44.533990   30158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 57793 <nil> <nil>}
	I0613 12:25:44.534005   30158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-660000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-660000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-660000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 12:25:44.650101   30158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 12:25:44.650124   30158 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 12:25:44.650142   30158 ubuntu.go:177] setting up certificates
	I0613 12:25:44.650149   30158 provision.go:83] configureAuth start
	I0613 12:25:44.650231   30158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-660000
	I0613 12:25:44.700511   30158 provision.go:138] copyHostCerts
	I0613 12:25:44.700615   30158 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 12:25:44.700625   30158 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 12:25:44.700752   30158 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 12:25:44.700972   30158 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 12:25:44.700978   30158 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 12:25:44.701046   30158 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 12:25:44.701212   30158 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 12:25:44.701218   30158 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 12:25:44.701286   30158 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 12:25:44.701421   30158 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-660000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-660000]
	I0613 12:25:44.807522   30158 provision.go:172] copyRemoteCerts
	I0613 12:25:44.807599   30158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 12:25:44.807699   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:44.857570   30158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57793 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:25:44.944418   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0613 12:25:44.966704   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 12:25:44.989621   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0613 12:25:45.011579   30158 provision.go:86] duration metric: configureAuth took 361.425529ms
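	The server cert generated during configureAuth above is an ordinary x509 certificate whose SANs are exactly the list in the provision.go line (192.168.67.2, 127.0.0.1, localhost, minikube, kubernetes-upgrade-660000). A compact standard-library sketch of producing such a cert; self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem, and the key type and usages are assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs and org taken from the provision.go log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-660000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-660000"},
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}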
	I0613 12:25:45.011593   30158 ubuntu.go:193] setting minikube options for container-runtime
	I0613 12:25:45.011744   30158 config.go:182] Loaded profile config "kubernetes-upgrade-660000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0613 12:25:45.011804   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:45.061360   30158 main.go:141] libmachine: Using SSH client type: native
	I0613 12:25:45.061708   30158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 57793 <nil> <nil>}
	I0613 12:25:45.061723   30158 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 12:25:45.180109   30158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 12:25:45.180132   30158 ubuntu.go:71] root file system type: overlay
	I0613 12:25:45.180222   30158 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 12:25:45.180303   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:45.230334   30158 main.go:141] libmachine: Using SSH client type: native
	I0613 12:25:45.230684   30158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 57793 <nil> <nil>}
	I0613 12:25:45.230733   30158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 12:25:45.357504   30158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 12:25:45.357602   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:45.407816   30158 main.go:141] libmachine: Using SSH client type: native
	I0613 12:25:45.408166   30158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 57793 <nil> <nil>}
	I0613 12:25:45.408182   30158 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 12:25:46.072757   30158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-05-25 21:51:00.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:25:45.355130926 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0613 12:25:46.072785   30158 machine.go:91] provisioned docker machine in 1.772853582s
	I0613 12:25:46.072794   30158 client.go:171] LocalClient.Create took 8.914472632s
	I0613 12:25:46.072812   30158 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-660000" took 8.914554149s
	I0613 12:25:46.072829   30158 start.go:300] post-start starting for "kubernetes-upgrade-660000" (driver="docker")
	I0613 12:25:46.072839   30158 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 12:25:46.072916   30158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 12:25:46.072987   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:46.122574   30158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57793 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:25:46.210645   30158 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 12:25:46.214832   30158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 12:25:46.214857   30158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 12:25:46.214865   30158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 12:25:46.214872   30158 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 12:25:46.214881   30158 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 12:25:46.214978   30158 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 12:25:46.215170   30158 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 12:25:46.215360   30158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 12:25:46.224196   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:25:46.246226   30158 start.go:303] post-start completed in 173.391547ms
	I0613 12:25:46.246738   30158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-660000
	I0613 12:25:46.296582   30158 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/config.json ...
	I0613 12:25:46.297023   30158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:25:46.297084   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:46.345920   30158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57793 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:25:46.430028   30158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 12:25:46.435481   30158 start.go:128] duration metric: createHost completed in 9.299265858s
	I0613 12:25:46.435501   30158 start.go:83] releasing machines lock for "kubernetes-upgrade-660000", held for 9.299381165s
	I0613 12:25:46.435584   30158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-660000
	I0613 12:25:46.485101   30158 ssh_runner.go:195] Run: cat /version.json
	I0613 12:25:46.485145   30158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 12:25:46.485177   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:46.485226   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:46.537069   30158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57793 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:25:46.537070   30158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57793 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:25:46.726207   30158 ssh_runner.go:195] Run: systemctl --version
	I0613 12:25:46.731692   30158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0613 12:25:46.737108   30158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0613 12:25:46.760349   30158 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0613 12:25:46.760421   30158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0613 12:25:46.776790   30158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0613 12:25:46.792849   30158 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0613 12:25:46.792872   30158 start.go:464] detecting cgroup driver to use...
	I0613 12:25:46.792888   30158 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:25:46.793011   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:25:46.809267   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0613 12:25:46.819216   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 12:25:46.829243   30158 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 12:25:46.829303   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 12:25:46.839608   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:25:46.849675   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 12:25:46.859846   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:25:46.870060   30158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 12:25:46.879584   30158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0613 12:25:46.889718   30158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 12:25:46.898498   30158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 12:25:46.907021   30158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:25:46.984140   30158 ssh_runner.go:195] Run: sudo systemctl restart containerd
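The run of sed edits above boils down to pinning containerd to the cgroupfs driver with a runc v2 runtime, pause:3.1 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. The log never prints the file, but assuming containerd's standard v2 CRI config layout, the patched /etc/containerd/config.toml stanzas would read roughly:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.1"
      restrict_oom_score_adj = false
    [plugins."io.containerd.grpc.v1.cri".cni]
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false

The daemon-reload and containerd restart that follow pick these values up.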
	I0613 12:25:47.065001   30158 start.go:464] detecting cgroup driver to use...
	I0613 12:25:47.065024   30158 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:25:47.065091   30158 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 12:25:47.077901   30158 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 12:25:47.077975   30158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 12:25:47.089488   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:25:47.106785   30158 ssh_runner.go:195] Run: which cri-dockerd
	I0613 12:25:47.111625   30158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 12:25:47.131230   30158 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 12:25:47.151767   30158 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 12:25:47.245126   30158 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 12:25:47.338061   30158 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 12:25:47.338078   30158 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 12:25:47.355955   30158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:25:47.423101   30158 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:25:47.668654   30158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:25:47.699411   30158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
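The 144-byte /etc/docker/daemon.json scp'd above is not echoed in the log; the key that matters for the "cgroupfs" decision is Docker's exec-opts switch, so the file plausibly contains something like this sketch (the exact contents are an assumption):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }

The daemon-reload plus docker restart that follow make the setting effective, and the two docker version probes confirm the daemon came back.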
	I0613 12:25:47.751721   30158 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.2 ...
	I0613 12:25:47.751860   30158 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-660000 dig +short host.docker.internal
	I0613 12:25:47.861191   30158 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 12:25:47.861321   30158 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 12:25:47.866250   30158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 12:25:47.877598   30158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:25:47.927118   30158 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:25:47.927206   30158 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:25:47.949923   30158 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:25:47.949945   30158 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0613 12:25:47.950009   30158 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 12:25:47.959390   30158 ssh_runner.go:195] Run: which lz4
	I0613 12:25:47.963673   30158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0613 12:25:47.968029   30158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0613 12:25:47.968063   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0613 12:25:52.787008   30158 docker.go:600] Took 4.823528 seconds to copy over tarball
	I0613 12:25:52.787093   30158 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0613 12:25:55.043800   30158 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256742725s)
	I0613 12:25:55.043814   30158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0613 12:25:55.112473   30158 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 12:25:55.122801   30158 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0613 12:25:55.139791   30158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:25:55.210274   30158 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:25:55.975324   30158 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:25:55.996253   30158 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:25:55.996271   30158 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
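The mismatch above is purely one of registry names: the v1.16 preload ships k8s.gcr.io/* tags while this minikube build looks for registry.k8s.io/*, so LoadImages kicks in. A hypothetical manual fix (not what minikube does here; it falls back to its local image cache instead) would be to retag the preloaded images:

    # retag every preloaded image under the registry name the loader expects
    for img in kube-apiserver:v1.16.0 kube-proxy:v1.16.0 \
               kube-controller-manager:v1.16.0 kube-scheduler:v1.16.0 \
               pause:3.1 etcd:3.3.15-0 coredns:1.6.2; do
        docker tag "k8s.gcr.io/${img}" "registry.k8s.io/${img}"
    done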
	I0613 12:25:55.996280   30158 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0613 12:25:56.003516   30158 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:25:56.003537   30158 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0613 12:25:56.003579   30158 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:25:56.003516   30158 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0613 12:25:56.003516   30158 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:25:56.003738   30158 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:25:56.003740   30158 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:25:56.003864   30158 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:25:56.010038   30158 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:25:56.010376   30158 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:25:56.012025   30158 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0613 12:25:56.012103   30158 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:25:56.012399   30158 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:25:56.012504   30158 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0613 12:25:56.012761   30158 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:25:56.013319   30158 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:25:57.103697   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:25:57.549555   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:25:57.572465   30158 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0613 12:25:57.572510   30158 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:25:57.572569   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:25:57.595936   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0613 12:25:57.596379   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0613 12:25:57.617823   30158 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0613 12:25:57.617855   30158 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0613 12:25:57.617910   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0613 12:25:57.640914   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0613 12:25:57.829475   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0613 12:25:57.843731   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:25:57.855430   30158 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0613 12:25:57.855462   30158 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:25:57.855520   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0613 12:25:57.869697   30158 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0613 12:25:57.869724   30158 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:25:57.869831   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:25:57.880458   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0613 12:25:57.892755   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0613 12:25:58.062735   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0613 12:25:58.085429   30158 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0613 12:25:58.085456   30158 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0613 12:25:58.085515   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0613 12:25:58.108424   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0613 12:25:58.441076   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:25:58.463111   30158 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0613 12:25:58.463139   30158 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:25:58.463205   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:25:58.483849   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0613 12:25:58.671657   30158 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:25:58.694957   30158 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0613 12:25:58.694982   30158 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:25:58.695050   30158 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:25:58.715771   30158 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0613 12:25:58.715833   30158 cache_images.go:92] LoadImages completed in 2.719605223s
	W0613 12:25:58.715884   30158 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0613 12:25:58.715957   30158 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 12:25:58.767306   30158 cni.go:84] Creating CNI manager for ""
	I0613 12:25:58.767323   30158 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 12:25:58.767336   30158 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0613 12:25:58.767353   30158 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-660000 NodeName:kubernetes-upgrade-660000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0613 12:25:58.767454   30158 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-660000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-660000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0613 12:25:58.767522   30158 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-660000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0613 12:25:58.767588   30158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0613 12:25:58.776843   30158 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 12:25:58.776907   30158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 12:25:58.786164   30158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0613 12:25:58.803066   30158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 12:25:58.819632   30158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0613 12:25:58.836515   30158 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0613 12:25:58.841104   30158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 12:25:58.852455   30158 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000 for IP: 192.168.67.2
	I0613 12:25:58.852472   30158 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:58.852661   30158 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 12:25:58.852724   30158 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 12:25:58.852767   30158 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key
	I0613 12:25:58.852783   30158 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt with IP's: []
	I0613 12:25:58.983838   30158 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt ...
	I0613 12:25:58.983852   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt: {Name:mk3758ae86fc3006224522cbe68860e3f3681a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:58.984183   30158 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key ...
	I0613 12:25:58.984191   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key: {Name:mk2efa0c8ffb215fb2c6994cbf8e5c3c62417007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:58.984409   30158 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key.c7fa3a9e
	I0613 12:25:58.984424   30158 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0613 12:25:59.451809   30158 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt.c7fa3a9e ...
	I0613 12:25:59.451828   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt.c7fa3a9e: {Name:mk3d160e0310e4252e1fced2aaf1ae1b60371e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:59.452130   30158 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key.c7fa3a9e ...
	I0613 12:25:59.452139   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key.c7fa3a9e: {Name:mka3759bcd3f35c4e2fed9a5efbf917e29a04856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:59.452323   30158 certs.go:337] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt
	I0613 12:25:59.452476   30158 certs.go:341] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key
	I0613 12:25:59.452615   30158 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.key
	I0613 12:25:59.452628   30158 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.crt with IP's: []
	I0613 12:25:59.486996   30158 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.crt ...
	I0613 12:25:59.487006   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.crt: {Name:mk61ea499b766a6ebe7b64462fa06398d5d3a9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:59.487234   30158 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.key ...
	I0613 12:25:59.487242   30158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.key: {Name:mkb0f9a4f2e10ec7fb92e284ab5c9c2569ec7911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:25:59.487643   30158 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 12:25:59.487694   30158 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 12:25:59.487708   30158 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 12:25:59.487740   30158 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 12:25:59.487772   30158 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 12:25:59.487809   30158 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 12:25:59.487880   30158 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:25:59.488447   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 12:25:59.511720   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0613 12:25:59.534441   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 12:25:59.556698   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0613 12:25:59.578803   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 12:25:59.601177   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 12:25:59.623412   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 12:25:59.647717   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 12:25:59.671861   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 12:25:59.696064   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 12:25:59.718487   30158 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 12:25:59.740639   30158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 12:25:59.758018   30158 ssh_runner.go:195] Run: openssl version
	I0613 12:25:59.764135   30158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 12:25:59.774231   30158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:25:59.779214   30158 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:25:59.779283   30158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:25:59.786503   30158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 12:25:59.796948   30158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 12:25:59.807005   30158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 12:25:59.811744   30158 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 12:25:59.811797   30158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 12:25:59.819192   30158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
	I0613 12:25:59.829163   30158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 12:25:59.839215   30158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 12:25:59.843794   30158 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 12:25:59.843842   30158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 12:25:59.850996   30158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
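The test -L / ln -fs pairs above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located via a <subject-hash>.0 symlink, where the hash is exactly what the preceding `openssl x509 -hash -noout` call prints (b5213941, 51391683 and 3ec20f2e here). Recreating one link by hand looks like:

    # compute the subject hash and (re)create the lookup symlink for the minikube CA
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"

so b5213941.0 resolves to minikubeCA.pem.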
	I0613 12:25:59.861008   30158 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 12:25:59.865798   30158 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0613 12:25:59.865844   30158 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:25:59.865950   30158 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:25:59.888320   30158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 12:25:59.897663   30158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 12:25:59.907104   30158 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 12:25:59.907161   30158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:25:59.916964   30158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0613 12:25:59.917002   30158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 12:25:59.972713   30158 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0613 12:25:59.972765   30158 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 12:26:00.261491   30158 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 12:26:00.261581   30158 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 12:26:00.261668   30158 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 12:26:00.460572   30158 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 12:26:00.461261   30158 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 12:26:00.467778   30158 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0613 12:26:00.537558   30158 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 12:26:00.579834   30158 out.go:204]   - Generating certificates and keys ...
	I0613 12:26:00.579996   30158 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 12:26:00.580115   30158 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 12:26:00.684120   30158 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0613 12:26:00.942460   30158 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0613 12:26:01.101609   30158 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0613 12:26:01.220047   30158 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0613 12:26:01.271843   30158 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0613 12:26:01.271963   30158 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-660000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0613 12:26:01.457895   30158 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0613 12:26:01.458252   30158 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-660000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0613 12:26:01.768310   30158 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0613 12:26:01.947695   30158 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0613 12:26:02.077114   30158 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0613 12:26:02.077235   30158 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 12:26:02.277576   30158 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 12:26:02.538927   30158 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 12:26:02.907464   30158 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 12:26:03.078169   30158 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 12:26:03.079039   30158 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 12:26:03.103395   30158 out.go:204]   - Booting up control plane ...
	I0613 12:26:03.103562   30158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 12:26:03.103712   30158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 12:26:03.103849   30158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 12:26:03.103983   30158 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 12:26:03.104235   30158 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 12:26:43.087874   30158 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 12:26:43.088365   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:26:43.088507   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:26:48.089552   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:26:48.089744   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:26:58.091210   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:26:58.091426   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:27:18.093088   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:27:18.093312   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:27:58.093824   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:27:58.094030   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:27:58.094045   30158 kubeadm.go:322] 
	I0613 12:27:58.094082   30158 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:27:58.094119   30158 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:27:58.094128   30158 kubeadm.go:322] 
	I0613 12:27:58.094160   30158 kubeadm.go:322] This error is likely caused by:
	I0613 12:27:58.094213   30158 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:27:58.094376   30158 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:27:58.094388   30158 kubeadm.go:322] 
	I0613 12:27:58.094486   30158 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:27:58.094517   30158 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:27:58.094549   30158 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:27:58.094554   30158 kubeadm.go:322] 
	I0613 12:27:58.094747   30158 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:27:58.094851   30158 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0613 12:27:58.094951   30158 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0613 12:27:58.095016   30158 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:27:58.095107   30158 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:27:58.095145   30158 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:27:58.097859   30158 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:27:58.097931   30158 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:27:58.098035   30158 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:27:58.098117   30158 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:27:58.098192   30158 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:27:58.098262   30158 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0613 12:27:58.098339   30158 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-660000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-660000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
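
At this point the kubelet never answered on port 10248, so kubeadm gave up waiting for the control plane. The log's own suggestions are the right first moves; collected into one sketch to run inside the node:

    # get a shell in the node container, then inspect kubelet and control-plane containers
    docker exec -it kubernetes-upgrade-660000 bash
    systemctl status kubelet
    journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID    # CONTAINERID taken from the previous command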
	
	I0613 12:27:58.098373   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0613 12:27:58.540807   30158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:27:58.551810   30158 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 12:27:58.551866   30158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:27:58.560751   30158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0613 12:27:58.560775   30158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 12:27:58.726138   30158 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:27:58.726241   30158 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:27:58.781075   30158 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:27:58.863079   30158 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:29:54.678940   30158 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:29:54.679077   30158 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0613 12:29:54.683131   30158 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0613 12:29:54.683180   30158 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 12:29:54.683240   30158 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 12:29:54.683323   30158 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 12:29:54.683416   30158 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 12:29:54.683493   30158 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 12:29:54.683588   30158 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 12:29:54.683646   30158 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0613 12:29:54.683692   30158 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 12:29:54.707054   30158 out.go:204]   - Generating certificates and keys ...
	I0613 12:29:54.707247   30158 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 12:29:54.707365   30158 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 12:29:54.707489   30158 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0613 12:29:54.707601   30158 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0613 12:29:54.707715   30158 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0613 12:29:54.707839   30158 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0613 12:29:54.707938   30158 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0613 12:29:54.708089   30158 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0613 12:29:54.708240   30158 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0613 12:29:54.708341   30158 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0613 12:29:54.708404   30158 kubeadm.go:322] [certs] Using the existing "sa" key
	I0613 12:29:54.708466   30158 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 12:29:54.708525   30158 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 12:29:54.708589   30158 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 12:29:54.708660   30158 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 12:29:54.708714   30158 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 12:29:54.708793   30158 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 12:29:54.772121   30158 out.go:204]   - Booting up control plane ...
	I0613 12:29:54.772270   30158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 12:29:54.772394   30158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 12:29:54.772515   30158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 12:29:54.772660   30158 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 12:29:54.772888   30158 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 12:29:54.772966   30158 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 12:29:54.773109   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:29:54.773402   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:29:54.773535   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:29:54.773828   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:29:54.773944   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:29:54.774221   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:29:54.774341   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:29:54.774609   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:29:54.774695   30158 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:29:54.774892   30158 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:29:54.774905   30158 kubeadm.go:322] 
	I0613 12:29:54.774943   30158 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:29:54.774995   30158 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:29:54.775003   30158 kubeadm.go:322] 
	I0613 12:29:54.775039   30158 kubeadm.go:322] This error is likely caused by:
	I0613 12:29:54.775072   30158 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:29:54.775176   30158 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:29:54.775183   30158 kubeadm.go:322] 
	I0613 12:29:54.775286   30158 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:29:54.775339   30158 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:29:54.775374   30158 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:29:54.775384   30158 kubeadm.go:322] 
	I0613 12:29:54.775506   30158 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:29:54.775608   30158 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0613 12:29:54.775713   30158 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0613 12:29:54.775760   30158 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:29:54.775843   30158 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:29:54.775889   30158 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:29:54.775916   30158 kubeadm.go:406] StartCluster complete in 3m54.915418033s
	I0613 12:29:54.776020   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:29:54.797827   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.797841   30158 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:29:54.797909   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:29:54.818641   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.818654   30158 logs.go:286] No container was found matching "etcd"
	I0613 12:29:54.818723   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:29:54.838342   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.838356   30158 logs.go:286] No container was found matching "coredns"
	I0613 12:29:54.838424   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:29:54.860168   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.860181   30158 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:29:54.860262   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:29:54.881013   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.881029   30158 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:29:54.881108   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:29:54.902707   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.902720   30158 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:29:54.902787   30158 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:29:54.923849   30158 logs.go:284] 0 containers: []
	W0613 12:29:54.923862   30158 logs.go:286] No container was found matching "kindnet"
	I0613 12:29:54.923870   30158 logs.go:123] Gathering logs for kubelet ...
	I0613 12:29:54.923878   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:29:54.966744   30158 logs.go:123] Gathering logs for dmesg ...
	I0613 12:29:54.966763   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:29:54.983996   30158 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:29:54.984011   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:29:55.056310   30158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:29:55.056328   30158 logs.go:123] Gathering logs for Docker ...
	I0613 12:29:55.056337   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:29:55.075570   30158 logs.go:123] Gathering logs for container status ...
	I0613 12:29:55.075585   30158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0613 12:29:55.137747   30158 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0613 12:29:55.137773   30158 out.go:239] * 
	W0613 12:29:55.137813   30158 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 12:29:55.137827   30158 out.go:239] * 
	W0613 12:29:55.138552   30158 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0613 12:29:55.217046   30158 out.go:177] 
	W0613 12:29:55.275319   30158 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 12:29:55.275363   30158 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0613 12:29:55.275389   30158 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0613 12:29:55.386421   30158 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
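For anyone triaging this K8S_KUBELET_NOT_RUNNING failure by hand, here is a minimal sketch built only from the commands kubeadm and minikube themselves suggest in the stderr above. The profile name is reused from this run, and `ssh -- <cmd>` is assumed as the way to run commands inside the node:

	# check the kubelet inside the node (the two commands kubeadm suggests)
	out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 ssh -- systemctl status kubelet
	out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 ssh -- journalctl -xeu kubelet
	# list any control-plane containers the runtime actually started
	out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 ssh -- "docker ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver override minikube itself suggests above
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd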
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-660000
version_upgrade_test.go:239: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-660000: (1.875042518s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 status --format={{.Host}}: exit status 7 (101.629211ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
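As context for the "may be ok": minikube status encodes component state bitwise in its exit code (1 = host not running, 2 = cluster not running, 4 = Kubernetes not running), so exit status 7 lines up with the Stopped host printed on stdout right after the explicit stop above. A quick way to see it, assuming the same profile:

	out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 status --format={{.Host}}
	echo $?   # 7 = 1 + 2 + 4, i.e. everything stopped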
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:255: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker : (4m33.055175263s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-660000 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (694.651609ms)

-- stdout --
	* [kubernetes-upgrade-660000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-660000
	    minikube start -p kubernetes-upgrade-660000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6600002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-660000 --kubernetes-version=v1.27.2
	    

** /stderr **
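This is the expected negative path: minikube cannot safely roll the control plane back in place, so it refuses up front with a dedicated exit code instead of attempting the downgrade. A minimal reproduction of the check, using the exit status observed in the run above:

	# downgrade attempt is expected to fail fast with K8S_DOWNGRADE_UNSUPPORTED
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
	echo $?   # 106, as observed above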
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:287: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-660000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker : (32.583003553s)
version_upgrade_test.go:291: *** TestKubernetesUpgrade FAILED at 2023-06-13 12:35:03.885733 -0700 PDT m=+3170.224925192
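The post-mortem that follows is the same data the troubleshooting box above asks to attach to an issue, gathered automatically by the test harness; collected by hand it would look roughly like this (the logs.txt path is illustrative):

	out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 logs --file=logs.txt
	docker inspect kubernetes-upgrade-660000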
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-660000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-660000:

-- stdout --
	[
	    {
	        "Id": "bc20aa6573e81332698fcfacb673a2ea7a808bddf8aa24ecdbb3df49f62bb1c5",
	        "Created": "2023-06-13T19:25:43.54184388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 605982,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:29:58.865306401Z",
	            "FinishedAt": "2023-06-13T19:29:56.125505486Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/bc20aa6573e81332698fcfacb673a2ea7a808bddf8aa24ecdbb3df49f62bb1c5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc20aa6573e81332698fcfacb673a2ea7a808bddf8aa24ecdbb3df49f62bb1c5/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc20aa6573e81332698fcfacb673a2ea7a808bddf8aa24ecdbb3df49f62bb1c5/hosts",
	        "LogPath": "/var/lib/docker/containers/bc20aa6573e81332698fcfacb673a2ea7a808bddf8aa24ecdbb3df49f62bb1c5/bc20aa6573e81332698fcfacb673a2ea7a808bddf8aa24ecdbb3df49f62bb1c5-json.log",
	        "Name": "/kubernetes-upgrade-660000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-660000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-660000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c32e74ca949d85b1c7aaebe43aaddb4e494759076e4b6ad6c1491fa9796a86d-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c32e74ca949d85b1c7aaebe43aaddb4e494759076e4b6ad6c1491fa9796a86d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c32e74ca949d85b1c7aaebe43aaddb4e494759076e4b6ad6c1491fa9796a86d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c32e74ca949d85b1c7aaebe43aaddb4e494759076e4b6ad6c1491fa9796a86d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-660000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-660000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-660000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-660000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-660000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ea68d5cade0f92e1be17ece74096a72c356d70365864b45c33038275e2c97339",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58001"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58002"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58003"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58004"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58005"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ea68d5cade0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-660000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bc20aa6573e8",
	                        "kubernetes-upgrade-660000"
	                    ],
	                    "NetworkID": "1e98c322e59d75885a21aa602089f5376e8119a0042c8040d248137e3d4979a5",
	                    "EndpointID": "e4612d4140768f75ba871cd058bf11a96154db0f305b1f4189613d473587b7c1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
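Note: the inspect output above publishes every guest port only on 127.0.0.1, each behind an ephemeral host port (58001-58005). The harness reads these mappings back with docker's Go-template syntax; a minimal sketch using the same template that appears later in this log, with the profile name, port, and key path taken from this report:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-660000
    # prints 58001 per the Ports block above; the node is then reachable as:
    ssh -p 58001 -i /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa docker@127.0.0.1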
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-660000 -n kubernetes-upgrade-660000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-660000 logs -n 25: (3.158491376s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo docker                        | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo cat                           | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo                               | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo find                          | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-185000 sudo crio                          | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-185000                                    | kindnet-185000            | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT | 13 Jun 23 12:33 PDT |
	| start   | -p calico-185000 --memory=3072                       | calico-185000             | jenkins | v1.30.1 | 13 Jun 23 12:33 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-660000                         | kubernetes-upgrade-660000 | jenkins | v1.30.1 | 13 Jun 23 12:34 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-660000                         | kubernetes-upgrade-660000 | jenkins | v1.30.1 | 13 Jun 23 12:34 PDT | 13 Jun 23 12:35 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
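	Note: each Audit row above is a complete minikube invocation and can be replayed against a live profile; for example, the docker unit dump recorded in the table (kindnet-185000 is deleted further down the table, so a still-existing profile would have to be substituted):
	
	    out/minikube-darwin-amd64 ssh -p kindnet-185000 sudo systemctl cat docker --no-pager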
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 12:34:31
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 12:34:31.357554   32956 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:34:31.357746   32956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:34:31.357753   32956 out.go:309] Setting ErrFile to fd 2...
	I0613 12:34:31.357758   32956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:34:31.357879   32956 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:34:31.360301   32956 out.go:303] Setting JSON to false
	I0613 12:34:31.382825   32956 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9242,"bootTime":1686675629,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 12:34:31.382928   32956 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 12:34:31.406974   32956 out.go:177] * [kubernetes-upgrade-660000] minikube v1.30.1 on Darwin 13.4
	I0613 12:34:31.464948   32956 notify.go:220] Checking for updates...
	I0613 12:34:31.485875   32956 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 12:34:31.559837   32956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:34:31.617842   32956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 12:34:31.659848   32956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 12:34:31.717786   32956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 12:34:31.775886   32956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 12:34:30.032153   32812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0613 12:34:30.532112   32812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0613 12:34:31.032454   32812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0613 12:34:31.532421   32812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0613 12:34:31.629595   32812 kubeadm.go:1081] duration metric: took 11.776570598s to wait for elevateKubeSystemPrivileges.
	I0613 12:34:31.629613   32812 kubeadm.go:406] StartCluster complete in 22.158457581s
	I0613 12:34:31.629630   32812 settings.go:142] acquiring lock: {Name:mkafbfcc19c3ab5c202e867761622546d4c1b734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:34:31.629724   32812 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:34:31.630419   32812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:34:31.639448   32812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0613 12:34:31.639500   32812 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0613 12:34:31.639566   32812 addons.go:66] Setting storage-provisioner=true in profile "calico-185000"
	I0613 12:34:31.639567   32812 addons.go:66] Setting default-storageclass=true in profile "calico-185000"
	I0613 12:34:31.639581   32812 config.go:182] Loaded profile config "calico-185000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:34:31.639583   32812 addons.go:228] Setting addon storage-provisioner=true in "calico-185000"
	I0613 12:34:31.639586   32812 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-185000"
	I0613 12:34:31.639622   32812 host.go:66] Checking if "calico-185000" exists ...
	I0613 12:34:31.639860   32812 cli_runner.go:164] Run: docker container inspect calico-185000 --format={{.State.Status}}
	I0613 12:34:31.639967   32812 cli_runner.go:164] Run: docker container inspect calico-185000 --format={{.State.Status}}
	I0613 12:34:31.808767   32812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0613 12:34:31.835977   32812 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:34:31.798593   32956 config.go:182] Loaded profile config "kubernetes-upgrade-660000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:34:31.799021   32956 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 12:34:31.879518   32956 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 12:34:31.879785   32956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:34:32.012978   32956 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:71 SystemTime:2023-06-13 19:34:32.001340169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:34:32.049895   32956 out.go:177] * Using the docker driver based on existing profile
	I0613 12:34:32.092018   32956 start.go:297] selected driver: docker
	I0613 12:34:32.092035   32956 start.go:884] validating driver "docker" against &{Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:34:32.092188   32956 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 12:34:32.096106   32956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:34:32.211297   32956 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:71 SystemTime:2023-06-13 19:34:32.199726951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:34:32.211535   32956 cni.go:84] Creating CNI manager for ""
	I0613 12:34:32.211550   32956 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 12:34:32.211566   32956 start_flags.go:319] config:
	{Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
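	Note: the config: block above is the in-memory ClusterConfig; the profile.go line below persists it to the profile's config.json. A sketch of reading one field back out (jq is illustrative and not part of the test run; the path is the one written below):
	
	    jq '.KubernetesConfig.KubernetesVersion' /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/config.json
	    # "v1.27.2", matching the KubernetesVersion in the dump above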
	I0613 12:34:32.233153   32956 out.go:177] * Starting control plane node kubernetes-upgrade-660000 in cluster kubernetes-upgrade-660000
	I0613 12:34:32.253878   32956 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 12:34:32.275060   32956 out.go:177] * Pulling base image ...
	I0613 12:34:31.857021   32812 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0613 12:34:31.857040   32812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0613 12:34:31.857154   32812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-185000
	I0613 12:34:31.861149   32812 addons.go:228] Setting addon default-storageclass=true in "calico-185000"
	I0613 12:34:31.861225   32812 host.go:66] Checking if "calico-185000" exists ...
	I0613 12:34:31.861671   32812 cli_runner.go:164] Run: docker container inspect calico-185000 --format={{.State.Status}}
	I0613 12:34:31.967061   32812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58458 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/calico-185000/id_rsa Username:docker}
	I0613 12:34:31.990338   32812 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0613 12:34:31.990357   32812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0613 12:34:31.990470   32812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-185000
	I0613 12:34:32.100361   32812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58458 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/calico-185000/id_rsa Username:docker}
	I0613 12:34:32.115525   32812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0613 12:34:32.234985   32812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0613 12:34:32.314849   32812 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-185000" context rescaled to 1 replicas
	I0613 12:34:32.314874   32812 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0613 12:34:32.335917   32812 out.go:177] * Verifying Kubernetes components...
	I0613 12:34:32.296043   32956 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 12:34:32.296064   32956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 12:34:32.296122   32956 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0613 12:34:32.296136   32956 cache.go:57] Caching tarball of preloaded images
	I0613 12:34:32.296261   32956 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 12:34:32.296275   32956 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0613 12:34:32.296962   32956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/config.json ...
	I0613 12:34:32.380914   32956 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 12:34:32.380932   32956 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 12:34:32.380964   32956 cache.go:195] Successfully downloaded all kic artifacts
	I0613 12:34:32.381012   32956 start.go:365] acquiring machines lock for kubernetes-upgrade-660000: {Name:mk952feff60a3e0d983b47946508aa79d68dd1c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 12:34:32.381110   32956 start.go:369] acquired machines lock for "kubernetes-upgrade-660000" in 77.56µs
	I0613 12:34:32.381136   32956 start.go:96] Skipping create...Using existing machine configuration
	I0613 12:34:32.381149   32956 fix.go:54] fixHost starting: 
	I0613 12:34:32.381409   32956 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:34:32.435662   32956 fix.go:102] recreateIfNeeded on kubernetes-upgrade-660000: state=Running err=<nil>
	W0613 12:34:32.435695   32956 fix.go:128] unexpected machine state, will restart: <nil>
	I0613 12:34:32.456979   32956 out.go:177] * Updating the running docker "kubernetes-upgrade-660000" container ...
	I0613 12:34:32.373098   32812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:34:32.932134   32812 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.096174281s)
	I0613 12:34:32.932176   32812 start.go:899] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0613 12:34:33.355193   32812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.239653016s)
	I0613 12:34:33.355205   32812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.120222454s)
	I0613 12:34:33.355366   32812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-185000
	I0613 12:34:33.382288   32812 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0613 12:34:33.403258   32812 addons.go:499] enable addons completed in 1.763770448s: enabled=[storage-provisioner default-storageclass]
	I0613 12:34:33.440920   32812 node_ready.go:35] waiting up to 15m0s for node "calico-185000" to be "Ready" ...
	I0613 12:34:33.445969   32812 node_ready.go:49] node "calico-185000" has status "Ready":"True"
	I0613 12:34:33.445988   32812 node_ready.go:38] duration metric: took 5.043453ms waiting for node "calico-185000" to be "Ready" ...
	I0613 12:34:33.445997   32812 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0613 12:34:33.457688   32812 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace to be "Ready" ...
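	Note: two concurrent runs are interleaved in this log: pid 32812 is the simultaneous calico-185000 start and pid 32956 is kubernetes-upgrade-660000 itself. The addons and node-readiness lines above belong to the calico profile; a command like the following (illustrative, not part of the captured run) would confirm what that profile enabled:
	
	    out/minikube-darwin-amd64 addons list -p calico-185000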
	I0613 12:34:32.514935   32956 machine.go:88] provisioning docker machine ...
	I0613 12:34:32.515016   32956 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-660000"
	I0613 12:34:32.515144   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:32.580575   32956 main.go:141] libmachine: Using SSH client type: native
	I0613 12:34:32.581001   32956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 58001 <nil> <nil>}
	I0613 12:34:32.581017   32956 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-660000 && echo "kubernetes-upgrade-660000" | sudo tee /etc/hostname
	I0613 12:34:32.713765   32956 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-660000
	
	I0613 12:34:32.713907   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:32.776875   32956 main.go:141] libmachine: Using SSH client type: native
	I0613 12:34:32.777245   32956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 58001 <nil> <nil>}
	I0613 12:34:32.777259   32956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-660000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-660000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-660000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 12:34:32.900599   32956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 12:34:32.900620   32956 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 12:34:32.900649   32956 ubuntu.go:177] setting up certificates
	I0613 12:34:32.900662   32956 provision.go:83] configureAuth start
	I0613 12:34:32.900730   32956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-660000
	I0613 12:34:32.963708   32956 provision.go:138] copyHostCerts
	I0613 12:34:32.963859   32956 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 12:34:32.963882   32956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 12:34:32.964029   32956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 12:34:32.964356   32956 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 12:34:32.964366   32956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 12:34:32.964468   32956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 12:34:32.964773   32956 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 12:34:32.964784   32956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 12:34:32.964896   32956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 12:34:32.965183   32956 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-660000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-660000]
	I0613 12:34:33.057870   32956 provision.go:172] copyRemoteCerts
	I0613 12:34:33.057941   32956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 12:34:33.058030   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:33.110523   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:34:33.207601   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 12:34:33.249156   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0613 12:34:33.276477   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0613 12:34:33.300227   32956 provision.go:86] duration metric: configureAuth took 399.552494ms
	I0613 12:34:33.300243   32956 ubuntu.go:193] setting minikube options for container-runtime
	I0613 12:34:33.300403   32956 config.go:182] Loaded profile config "kubernetes-upgrade-660000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:34:33.300472   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:33.406980   32956 main.go:141] libmachine: Using SSH client type: native
	I0613 12:34:33.407330   32956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 58001 <nil> <nil>}
	I0613 12:34:33.407341   32956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 12:34:33.540725   32956 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 12:34:33.540742   32956 ubuntu.go:71] root file system type: overlay
	I0613 12:34:33.540868   32956 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 12:34:33.540969   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:33.601133   32956 main.go:141] libmachine: Using SSH client type: native
	I0613 12:34:33.601488   32956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 58001 <nil> <nil>}
	I0613 12:34:33.601546   32956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 12:34:33.746286   32956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 12:34:33.746394   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:33.804849   32956 main.go:141] libmachine: Using SSH client type: native
	I0613 12:34:33.805194   32956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 58001 <nil> <nil>}
	I0613 12:34:33.805207   32956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 12:34:33.939310   32956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
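	Note: the unit written above uses the standard systemd override idiom: the empty ExecStart= clears the command inherited from the base unit so the full dockerd invocation that follows replaces it rather than appending a second ExecStart, and the diff -u ... || { mv ...; systemctl -f restart docker; } one-liner only installs the file and restarts docker when the rendered unit actually changed. A hedged sketch of verifying the result on the node (standard systemd commands; profile name from this log):
	
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-660000 sudo systemctl cat docker --no-pager
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-660000 sudo systemctl show docker -p ExecStart --no-pager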
	I0613 12:34:33.939334   32956 machine.go:91] provisioned docker machine in 1.42441453s
	I0613 12:34:33.939349   32956 start.go:300] post-start starting for "kubernetes-upgrade-660000" (driver="docker")
	I0613 12:34:33.939364   32956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 12:34:33.939445   32956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 12:34:33.939529   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:33.997015   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:34:34.086563   32956 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 12:34:34.090995   32956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 12:34:34.091018   32956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 12:34:34.091026   32956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 12:34:34.091032   32956 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 12:34:34.091040   32956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 12:34:34.091127   32956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 12:34:34.091275   32956 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 12:34:34.091451   32956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 12:34:34.100761   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:34:34.127268   32956 start.go:303] post-start completed in 187.89239ms
	I0613 12:34:34.127384   32956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:34:34.127444   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:34.187048   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:34:34.270403   32956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 12:34:34.276130   32956 fix.go:56] fixHost completed within 1.895026812s
	I0613 12:34:34.276147   32956 start.go:83] releasing machines lock for "kubernetes-upgrade-660000", held for 1.895072701s
	I0613 12:34:34.276240   32956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-660000
	I0613 12:34:34.327300   32956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 12:34:34.327300   32956 ssh_runner.go:195] Run: cat /version.json
	I0613 12:34:34.327380   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:34.327388   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:34.386779   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:34:34.386795   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:34:34.598708   32956 ssh_runner.go:195] Run: systemctl --version
	I0613 12:34:34.604027   32956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0613 12:34:34.609313   32956 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0613 12:34:34.609370   32956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0613 12:34:34.618502   32956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
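Both `find ... -exec sed` commands force any pre-existing bridge/podman CNI config onto minikube's pod CIDR. A small Go sketch of the same subnet rewrite; operating on the raw conflist with a regexp mirrors the sed approach, though a real tool might unmarshal the JSON instead:

```go
// Hypothetical sketch of the subnet rewrite: pin any CNI "subnet" value
// to 10.244.0.0/16, as the sed expressions in the log do.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `{"name":"bridge","plugins":[{"type":"bridge","ipam":{"subnet":"192.168.0.0/24"}}]}`
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
}
```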
	I0613 12:34:34.627676   32956 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0613 12:34:34.627691   32956 start.go:464] detecting cgroup driver to use...
	I0613 12:34:34.627705   32956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:34:34.627812   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:34:34.644609   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0613 12:34:34.656192   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 12:34:34.670240   32956 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 12:34:34.670305   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 12:34:34.685196   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:34:34.697648   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 12:34:34.712506   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:34:34.724451   32956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 12:34:34.737236   32956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0613 12:34:34.749792   32956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 12:34:34.761551   32956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
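Enabling IPv4 forwarding is a plain procfs write; the Go equivalent of the `echo 1 > /proc/sys/net/ipv4/ip_forward` step above (root required):

```go
// Minimal sketch: enable IPv4 forwarding by writing to procfs,
// matching the shell one-liner in the log. Must run as root.
package main

import "os"

func main() {
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}
```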
	I0613 12:34:34.771327   32956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:34:34.854425   32956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0613 12:34:35.523411   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:38.022821   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:40.024211   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:42.025347   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:44.027833   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:45.092043   32956 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.237832728s)
	I0613 12:34:45.092061   32956 start.go:464] detecting cgroup driver to use...
	I0613 12:34:45.092075   32956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:34:45.092139   32956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 12:34:45.107333   32956 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 12:34:45.107404   32956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 12:34:45.124866   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:34:45.151457   32956 ssh_runner.go:195] Run: which cri-dockerd
	I0613 12:34:45.161078   32956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 12:34:45.178895   32956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 12:34:45.200119   32956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 12:34:45.312186   32956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 12:34:45.422918   32956 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 12:34:45.422935   32956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 12:34:45.444665   32956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:34:45.531174   32956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:34:45.955059   32956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 12:34:46.033742   32956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0613 12:34:46.113407   32956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 12:34:46.208294   32956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:34:46.287385   32956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0613 12:34:46.310752   32956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:34:46.409292   32956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0613 12:34:46.564999   32956 start.go:511] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0613 12:34:46.565171   32956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
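"Will wait 60s for socket path" is a poll-until-deadline on the cri-dockerd socket. A sketch of that wait (the 500ms poll interval is an assumption, not minikube's actual tick):

```go
// Hypothetical sketch: block until a Unix socket path exists or a
// deadline passes, as the "Will wait 60s" step above does.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```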
	I0613 12:34:46.572189   32956 start.go:532] Will wait 60s for crictl version
	I0613 12:34:46.572254   32956 ssh_runner.go:195] Run: which crictl
	I0613 12:34:46.577745   32956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0613 12:34:46.640755   32956 start.go:548] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1
	I0613 12:34:46.640857   32956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:34:46.682293   32956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:34:46.523865   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:49.024180   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:46.731883   32956 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0613 12:34:46.732013   32956 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-660000 dig +short host.docker.internal
	I0613 12:34:46.869653   32956 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 12:34:46.869787   32956 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
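`dig +short host.docker.internal` inside the container yields the host gateway address (192.168.65.254 here) so it can be pinned in /etc/hosts as host.minikube.internal. Roughly the same lookup with Go's resolver, which only succeeds where Docker Desktop defines that name:

```go
// Hypothetical sketch of the host-IP discovery step using the stdlib
// resolver instead of `docker exec ... dig`.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("host.docker.internal")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // e.g. [192.168.65.254], per the log
}
```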
	I0613 12:34:46.875291   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:46.986689   32956 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 12:34:46.986770   32956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:34:47.009690   32956 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:34:47.009714   32956 docker.go:566] Images already preloaded, skipping extraction
	I0613 12:34:47.009797   32956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:34:47.039140   32956 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:34:47.039188   32956 cache_images.go:84] Images are preloaded, skipping loading
	I0613 12:34:47.039281   32956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 12:34:47.099112   32956 cni.go:84] Creating CNI manager for ""
	I0613 12:34:47.099130   32956 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 12:34:47.099148   32956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0613 12:34:47.099167   32956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-660000 NodeName:kubernetes-upgrade-660000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0613 12:34:47.099302   32956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-660000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0613 12:34:47.099390   32956 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-660000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
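The kubeadm YAML and kubelet unit above are rendered from the options struct logged at kubeadm.go:176. A toy illustration of that render step with text/template; the template text and field names below are assumptions for illustration, not minikube's actual templates:

```go
// Hypothetical sketch: render a kubeadm config fragment from an options
// struct, in the spirit of the dumps above.
package main

import (
	"os"
	"text/template"
)

type opts struct {
	KubernetesVersion, ClusterName, PodSubnet string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{"v1.27.2", "mk", "10.244.0.0/16"})
}
```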
	I0613 12:34:47.099473   32956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0613 12:34:47.108751   32956 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 12:34:47.108812   32956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 12:34:47.120536   32956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0613 12:34:47.142252   32956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 12:34:47.164653   32956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0613 12:34:47.184472   32956 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0613 12:34:47.189287   32956 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000 for IP: 192.168.67.2
	I0613 12:34:47.189308   32956 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:34:47.189457   32956 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 12:34:47.189528   32956 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 12:34:47.189618   32956 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key
	I0613 12:34:47.189695   32956 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key.c7fa3a9e
	I0613 12:34:47.189757   32956 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.key
	I0613 12:34:47.189990   32956 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 12:34:47.190036   32956 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 12:34:47.190053   32956 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 12:34:47.190087   32956 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 12:34:47.190125   32956 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 12:34:47.190157   32956 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 12:34:47.190229   32956 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:34:47.190847   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 12:34:47.214106   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0613 12:34:47.253617   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 12:34:47.285704   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0613 12:34:47.309132   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 12:34:47.338144   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 12:34:47.367597   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 12:34:47.397132   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 12:34:47.421587   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 12:34:47.450020   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 12:34:47.482236   32956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 12:34:47.505806   32956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 12:34:47.529334   32956 ssh_runner.go:195] Run: openssl version
	I0613 12:34:47.537008   32956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 12:34:47.548483   32956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 12:34:47.554616   32956 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 12:34:47.554684   32956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 12:34:47.564532   32956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
	I0613 12:34:47.575967   32956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 12:34:47.587080   32956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 12:34:47.592037   32956 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 12:34:47.592106   32956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 12:34:47.599651   32956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
	I0613 12:34:47.609172   32956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 12:34:47.621417   32956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:34:47.626661   32956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:34:47.626717   32956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:34:47.634428   32956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
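Each `openssl x509 -hash` plus symlink pair above implements the standard CA directory layout: the subject hash names the link (e.g. b5213941.0 for minikubeCA.pem) so TLS stacks can locate the CA in /etc/ssl/certs by hash. A sketch of one hash-and-link round trip (root required for the symlink):

```go
// Hypothetical sketch of the hash-and-link step seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}
```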
	I0613 12:34:47.646439   32956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 12:34:47.651644   32956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0613 12:34:47.660770   32956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0613 12:34:47.669499   32956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0613 12:34:47.677644   32956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0613 12:34:47.686452   32956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0613 12:34:47.694260   32956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
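The `-checkend 86400` runs verify that no control-plane certificate expires within the next 24 hours. The same check in pure Go via crypto/x509 (the cert path is one of those probed above):

```go
// Pure-Go equivalent of `openssl x509 -checkend 86400`: report whether
// a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```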
	I0613 12:34:47.702388   32956 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:34:47.702501   32956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:34:47.725010   32956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 12:34:47.734742   32956 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0613 12:34:47.734760   32956 kubeadm.go:636] restartCluster start
	I0613 12:34:47.734836   32956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0613 12:34:47.743736   32956 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:47.743815   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:34:47.798250   32956 kubeconfig.go:92] found "kubernetes-upgrade-660000" server: "https://127.0.0.1:58005"
	I0613 12:34:47.799024   32956 kapi.go:59] client config for kubernetes-upgrade-660000: &rest.Config{Host:"https://127.0.0.1:58005", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key", CAFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2580020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0613 12:34:47.799789   32956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0613 12:34:47.809582   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:47.809664   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:47.820643   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:48.320773   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:48.320879   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:48.334157   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:48.820739   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:48.820800   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:48.832093   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:49.320692   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:49.320763   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:49.338084   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:49.820789   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:49.820872   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:49.835976   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:50.320628   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:50.320735   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:50.332165   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:50.820730   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:50.820849   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:34:50.841276   32956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:51.321237   32956 api_server.go:166] Checking apiserver status ...
	I0613 12:34:51.321322   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:34:51.339206   32956 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/13372/cgroup
	I0613 12:34:51.033070   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:53.527182   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	W0613 12:34:51.360321   32956 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/13372/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:51.385137   32956 ssh_runner.go:195] Run: ls
	I0613 12:34:51.417150   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:34:53.363772   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 12:34:53.363805   32956 retry.go:31] will retry after 227.574355ms: https://127.0.0.1:58005/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 12:34:53.591440   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:34:53.597524   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:34:53.597547   32956 retry.go:31] will retry after 352.547915ms: https://127.0.0.1:58005/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:34:53.950187   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:34:53.956506   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:34:53.956527   32956 retry.go:31] will retry after 423.630038ms: https://127.0.0.1:58005/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:34:54.380201   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:34:54.385927   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:34:54.385948   32956 retry.go:31] will retry after 459.352087ms: https://127.0.0.1:58005/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:34:54.845455   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:34:54.852040   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 200:
	ok
	I0613 12:34:54.867069   32956 system_pods.go:86] 5 kube-system pods found
	I0613 12:34:54.867088   32956 system_pods.go:89] "etcd-kubernetes-upgrade-660000" [f7007f49-4e48-46c7-aa08-1fafea12786f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0613 12:34:54.867095   32956 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-660000" [54fc0291-ebb9-41b5-98dc-ea45f9c1b08d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0613 12:34:54.867105   32956 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-660000" [be2688c8-f8bc-4538-aa1e-267ec1240b7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0613 12:34:54.867114   32956 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-660000" [0e3fd9ef-9094-4374-b93b-c90d3fd82fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0613 12:34:54.867121   32956 system_pods.go:89] "storage-provisioner" [9d467811-cf27-4d38-ab6d-71494aa73b8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0613 12:34:54.867127   32956 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0613 12:34:54.867134   32956 kubeadm.go:1128] stopping kube-system containers ...
	I0613 12:34:54.867206   32956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:34:54.890665   32956 docker.go:462] Stopping containers: [7deb4ee95ac2 b1f580300c60 af6005f0fe36 7b28173e70ad c06990b1c89b 929ed211e16d df7714f3e587 f977bfd9ce87 74f205e9e89b 6eb54b03550e 86f8eaf0d5d1 e1448073fae9 d6ccef2fc232 fcd9fc3cb492 855fc1da0871 bfc6fa20a18e]
	I0613 12:34:54.890750   32956 ssh_runner.go:195] Run: docker stop 7deb4ee95ac2 b1f580300c60 af6005f0fe36 7b28173e70ad c06990b1c89b 929ed211e16d df7714f3e587 f977bfd9ce87 74f205e9e89b 6eb54b03550e 86f8eaf0d5d1 e1448073fae9 d6ccef2fc232 fcd9fc3cb492 855fc1da0871 bfc6fa20a18e
	I0613 12:34:56.221533   32956 ssh_runner.go:235] Completed: docker stop 7deb4ee95ac2 b1f580300c60 af6005f0fe36 7b28173e70ad c06990b1c89b 929ed211e16d df7714f3e587 f977bfd9ce87 74f205e9e89b 6eb54b03550e 86f8eaf0d5d1 e1448073fae9 d6ccef2fc232 fcd9fc3cb492 855fc1da0871 bfc6fa20a18e: (1.330771327s)
	I0613 12:34:56.221624   32956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0613 12:34:56.292092   32956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:34:56.320439   32956 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 13 19:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 13 19:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jun 13 19:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 13 19:34 /etc/kubernetes/scheduler.conf
	
	I0613 12:34:56.320545   32956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0613 12:34:56.336772   32956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0613 12:34:56.352684   32956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0613 12:34:56.022617   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:58.027011   32812 pod_ready.go:102] pod "calico-kube-controllers-786b679988-282ts" in "kube-system" namespace has status "Ready":"False"
	I0613 12:34:56.378305   32956 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:56.385508   32956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0613 12:34:56.401748   32956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0613 12:34:56.414938   32956 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:34:56.415031   32956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0613 12:34:56.428538   32956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 12:34:56.443435   32956 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0613 12:34:56.443458   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:34:56.510898   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:34:57.050945   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:34:57.202856   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:34:57.269071   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
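Rather than a full `kubeadm init`, the restart path replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch that chains the same five phases seen above (invoking kubeadm from PATH rather than /var/lib/minikube/binaries is a simplification):

```go
// Hypothetical sketch of the restartCluster phase sequence.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", append([]string{"kubeadm"}, args...)...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}
```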
	I0613 12:34:57.349130   32956 api_server.go:52] waiting for apiserver process to appear ...
	I0613 12:34:57.349237   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:34:57.868701   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:34:58.369185   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:34:58.868872   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:34:58.882833   32956 api_server.go:72] duration metric: took 1.53374576s to wait for apiserver process to appear ...
	I0613 12:34:58.882846   32956 api_server.go:88] waiting for apiserver healthz status ...
	I0613 12:34:58.882856   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:35:00.678054   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0613 12:35:00.678095   32956 api_server.go:103] status: https://127.0.0.1:58005/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 12:35:01.178871   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:35:01.184738   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0613 12:35:01.184751   32956 api_server.go:103] status: https://127.0.0.1:58005/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:35:01.678873   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:35:01.684653   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0613 12:35:01.684672   32956 api_server.go:103] status: https://127.0.0.1:58005/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 12:35:02.178219   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:35:02.184492   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 200:
	ok
	I0613 12:35:02.193569   32956 api_server.go:141] control plane version: v1.27.2
	I0613 12:35:02.193589   32956 api_server.go:131] duration metric: took 3.310810865s to wait for apiserver health ...
	I0613 12:35:02.193599   32956 cni.go:84] Creating CNI manager for ""
	I0613 12:35:02.193607   32956 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 12:35:02.217214   32956 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0613 12:35:02.256454   32956 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0613 12:35:02.267919   32956 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
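Note: the 457-byte file copied above is minikube's generated bridge CNI config; its exact contents are not reproduced in this log. A representative conflist of the kind written to /etc/cni/net.d/1-k8s.conflist would look like the following (an illustrative sketch only; the field values, including the host-local subnet, are assumptions, not the actual file minikube wrote):

  # Illustrative bridge + portmap CNI chain of the kind minikube configures
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF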
	I0613 12:35:02.287946   32956 system_pods.go:43] waiting for kube-system pods to appear ...
	I0613 12:35:02.295221   32956 system_pods.go:59] 5 kube-system pods found
	I0613 12:35:02.295236   32956 system_pods.go:61] "etcd-kubernetes-upgrade-660000" [f7007f49-4e48-46c7-aa08-1fafea12786f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0613 12:35:02.295243   32956 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-660000" [54fc0291-ebb9-41b5-98dc-ea45f9c1b08d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0613 12:35:02.295267   32956 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-660000" [be2688c8-f8bc-4538-aa1e-267ec1240b7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0613 12:35:02.295295   32956 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-660000" [0e3fd9ef-9094-4374-b93b-c90d3fd82fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0613 12:35:02.295305   32956 system_pods.go:61] "storage-provisioner" [9d467811-cf27-4d38-ab6d-71494aa73b8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0613 12:35:02.295311   32956 system_pods.go:74] duration metric: took 7.353848ms to wait for pod list to return data ...
	I0613 12:35:02.295317   32956 node_conditions.go:102] verifying NodePressure condition ...
	I0613 12:35:02.298596   32956 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0613 12:35:02.298614   32956 node_conditions.go:123] node cpu capacity is 6
	I0613 12:35:02.298623   32956 node_conditions.go:105] duration metric: took 3.302853ms to run NodePressure ...
	I0613 12:35:02.298634   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:35:02.440937   32956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0613 12:35:02.450022   32956 ops.go:34] apiserver oom_adj: -16
	I0613 12:35:02.450037   32956 kubeadm.go:640] restartCluster took 14.715602707s
	I0613 12:35:02.450042   32956 kubeadm.go:406] StartCluster complete in 14.747999309s
	I0613 12:35:02.450060   32956 settings.go:142] acquiring lock: {Name:mkafbfcc19c3ab5c202e867761622546d4c1b734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:35:02.450156   32956 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:35:02.450867   32956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:35:02.451223   32956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0613 12:35:02.451263   32956 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0613 12:35:02.451358   32956 addons.go:66] Setting storage-provisioner=true in profile "kubernetes-upgrade-660000"
	I0613 12:35:02.451368   32956 addons.go:66] Setting default-storageclass=true in profile "kubernetes-upgrade-660000"
	I0613 12:35:02.451372   32956 addons.go:228] Setting addon storage-provisioner=true in "kubernetes-upgrade-660000"
	W0613 12:35:02.451379   32956 addons.go:237] addon storage-provisioner should already be in state true
	I0613 12:35:02.451400   32956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-660000"
	I0613 12:35:02.451402   32956 config.go:182] Loaded profile config "kubernetes-upgrade-660000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:35:02.451418   32956 host.go:66] Checking if "kubernetes-upgrade-660000" exists ...
	I0613 12:35:02.451724   32956 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:35:02.451831   32956 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:35:02.451866   32956 kapi.go:59] client config for kubernetes-upgrade-660000: &rest.Config{Host:"https://127.0.0.1:58005", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key", CAFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2580020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0613 12:35:02.458486   32956 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-660000" context rescaled to 1 replicas
	I0613 12:35:02.458519   32956 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0613 12:35:02.499283   32956 out.go:177] * Verifying Kubernetes components...
	I0613 12:35:02.573287   32956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:35:02.582203   32956 start.go:872] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0613 12:35:02.582935   32956 kapi.go:59] client config for kubernetes-upgrade-660000: &rest.Config{Host:"https://127.0.0.1:58005", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key", CAFile:"/Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2580020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0613 12:35:02.589408   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:35:02.603175   32956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:35:02.613979   32956 addons.go:228] Setting addon default-storageclass=true in "kubernetes-upgrade-660000"
	W0613 12:35:02.623969   32956 addons.go:237] addon default-storageclass should already be in state true
	I0613 12:35:02.624014   32956 host.go:66] Checking if "kubernetes-upgrade-660000" exists ...
	I0613 12:35:02.624205   32956 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0613 12:35:02.624240   32956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0613 12:35:02.624318   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:35:02.624941   32956 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-660000 --format={{.State.Status}}
	I0613 12:35:02.661693   32956 api_server.go:52] waiting for apiserver process to appear ...
	I0613 12:35:02.661768   32956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:35:02.677072   32956 api_server.go:72] duration metric: took 218.510653ms to wait for apiserver process to appear ...
	I0613 12:35:02.677114   32956 api_server.go:88] waiting for apiserver healthz status ...
	I0613 12:35:02.677137   32956 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58005/healthz ...
	I0613 12:35:02.685004   32956 api_server.go:279] https://127.0.0.1:58005/healthz returned 200:
	ok
	I0613 12:35:02.687125   32956 api_server.go:141] control plane version: v1.27.2
	I0613 12:35:02.687147   32956 api_server.go:131] duration metric: took 10.023612ms to wait for apiserver health ...
	I0613 12:35:02.687157   32956 system_pods.go:43] waiting for kube-system pods to appear ...
	I0613 12:35:02.687255   32956 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0613 12:35:02.687267   32956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0613 12:35:02.687342   32956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-660000
	I0613 12:35:02.687389   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:35:02.693168   32956 system_pods.go:59] 5 kube-system pods found
	I0613 12:35:02.693197   32956 system_pods.go:61] "etcd-kubernetes-upgrade-660000" [f7007f49-4e48-46c7-aa08-1fafea12786f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0613 12:35:02.693207   32956 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-660000" [54fc0291-ebb9-41b5-98dc-ea45f9c1b08d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0613 12:35:02.693219   32956 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-660000" [be2688c8-f8bc-4538-aa1e-267ec1240b7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0613 12:35:02.693226   32956 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-660000" [0e3fd9ef-9094-4374-b93b-c90d3fd82fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0613 12:35:02.693231   32956 system_pods.go:61] "storage-provisioner" [9d467811-cf27-4d38-ab6d-71494aa73b8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0613 12:35:02.693236   32956 system_pods.go:74] duration metric: took 6.063743ms to wait for pod list to return data ...
	I0613 12:35:02.693243   32956 kubeadm.go:581] duration metric: took 234.7086ms to wait for : map[apiserver:true system_pods:true] ...
	I0613 12:35:02.693253   32956 node_conditions.go:102] verifying NodePressure condition ...
	I0613 12:35:02.697105   32956 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0613 12:35:02.697119   32956 node_conditions.go:123] node cpu capacity is 6
	I0613 12:35:02.697130   32956 node_conditions.go:105] duration metric: took 3.873106ms to run NodePressure ...
	I0613 12:35:02.697138   32956 start.go:228] waiting for startup goroutines ...
	I0613 12:35:02.744498   32956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58001 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/kubernetes-upgrade-660000/id_rsa Username:docker}
	I0613 12:35:02.812826   32956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0613 12:35:02.879087   32956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0613 12:35:03.620829   32956 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0613 12:35:03.640559   32956 addons.go:499] enable addons completed in 1.189322625s: enabled=[storage-provisioner default-storageclass]
	I0613 12:35:03.640584   32956 start.go:233] waiting for cluster config update ...
	I0613 12:35:03.640603   32956 start.go:242] writing updated cluster config ...
	I0613 12:35:03.641082   32956 ssh_runner.go:195] Run: rm -f paused
	I0613 12:35:03.682363   32956 start.go:582] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0613 12:35:03.703828   32956 out.go:177] 
	W0613 12:35:03.725927   32956 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0613 12:35:03.746700   32956 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0613 12:35:03.809860   32956 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-660000" cluster and "default" namespace by default
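Note: the 500s during the healthz wait earlier in this log are the normal transient state while the apiserver's post-start hooks ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes) finish; the timestamps show minikube re-polling roughly every 500ms until it got 200. The same per-check breakdown can be fetched by hand against the forwarded port from this run (a sketch; port 58005 and the client cert paths are taken from the log above and are specific to this profile):

  # ?verbose makes /healthz list each check as [+]/[-] instead of a bare status code
  curl -sk \
    --cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.crt \
    --key  /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubernetes-upgrade-660000/client.key \
    "https://127.0.0.1:58005/healthz?verbose"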
	
	* 
	* ==> Docker <==
	* Jun 13 19:34:46 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:46Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jun 13 19:34:46 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:46Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jun 13 19:34:46 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:46Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jun 13 19:34:46 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:46Z" level=info msg="Start cri-dockerd grpc backend"
	Jun 13 19:34:46 kubernetes-upgrade-660000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jun 13 19:34:50 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c06990b1c89bac0ee0c7eca4fcc8ba0f766e73170a56de956f5633b43219033e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:50 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df7714f3e5877ba51cdd37dc1978119ac3fcb43fa06899fbe838ba792a2dff2a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:50 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/929ed211e16db7c4d302aca817dd2f4865aafeb67a742b82bb8e40d51ed42c7b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:50 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f977bfd9ce876dfa5585c6afe5c7df85a13f388fe9138335f6371613d1e9ae2f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.013856266Z" level=info msg="ignoring event" container=929ed211e16db7c4d302aca817dd2f4865aafeb67a742b82bb8e40d51ed42c7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.013903304Z" level=info msg="ignoring event" container=c06990b1c89bac0ee0c7eca4fcc8ba0f766e73170a56de956f5633b43219033e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.015847627Z" level=info msg="ignoring event" container=f977bfd9ce876dfa5585c6afe5c7df85a13f388fe9138335f6371613d1e9ae2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.024480120Z" level=info msg="ignoring event" container=7b28173e70ad6bed07d2909afae83220be28bcaf130b1f72d1a8f81973ac8790 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.037689011Z" level=info msg="ignoring event" container=af6005f0fe36350b5c45f19c9ab8761b112fc35c8a0d12ee6d80784a15afec9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.043366394Z" level=info msg="ignoring event" container=df7714f3e5877ba51cdd37dc1978119ac3fcb43fa06899fbe838ba792a2dff2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:55 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:55.052111608Z" level=info msg="ignoring event" container=b1f580300c6007438ab5d906aa535c7f0a00e00b4449ed9c79d355cfcc8274a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:56 kubernetes-upgrade-660000 dockerd[12427]: time="2023-06-13T19:34:56.167145420Z" level=info msg="ignoring event" container=7deb4ee95ac211135ba2f8543a344139bc7c18a3aa776c39d77533819c37b647 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4c73d36988dbbd37c37e244ac9434d2f90c4d739e90ccd4912f4f945e5d25eb/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: W0613 19:34:56.346911   12713 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3bac151c154faf765e66ed6d55d7ad9bfead2717aa7ceee2b4dc547bc76034ab/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: W0613 19:34:56.347737   12713 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/de455331700863ff0a07d92f08adf0749604eace92096e2126b778aacc64de7c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: W0613 19:34:56.353829   12713 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: time="2023-06-13T19:34:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a4dde1590e81ed121e68d210f173835cfe4d21bfe959930502c235c7f50b227/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:34:56 kubernetes-upgrade-660000 cri-dockerd[12713]: W0613 19:34:56.450723   12713 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
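Note: the "transport is closing" gRPC warnings are interleaved with the attempt-2 containers being created during the restart and were not fatal here (all four control-plane containers reach Running in the container status section below). The unit's recent output can be re-checked with (a sketch; assumes the cri-docker systemd unit name used by recent minikube node images):

  minikube -p kubernetes-upgrade-660000 ssh -- sudo journalctl -u cri-docker --no-pager -n 50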
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a81b0623a5473       c5b13e4f7806d       7 seconds ago       Running             kube-apiserver            2                   0a4dde1590e81       kube-apiserver-kubernetes-upgrade-660000
	79bff6de4bbcc       ac2b7465ebba9       7 seconds ago       Running             kube-controller-manager   2                   de45533170086       kube-controller-manager-kubernetes-upgrade-660000
	002da125729d8       89e70da428d29       8 seconds ago       Running             kube-scheduler            2                   3bac151c154fa       kube-scheduler-kubernetes-upgrade-660000
	64888b614986b       86b6af7dd652c       8 seconds ago       Running             etcd                      2                   b4c73d36988db       etcd-kubernetes-upgrade-660000
	7deb4ee95ac21       c5b13e4f7806d       15 seconds ago      Exited              kube-apiserver            1                   f977bfd9ce876       kube-apiserver-kubernetes-upgrade-660000
	b1f580300c600       89e70da428d29       15 seconds ago      Exited              kube-scheduler            1                   929ed211e16db       kube-scheduler-kubernetes-upgrade-660000
	af6005f0fe363       86b6af7dd652c       15 seconds ago      Exited              etcd                      1                   df7714f3e5877       etcd-kubernetes-upgrade-660000
	7b28173e70ad6       ac2b7465ebba9       15 seconds ago      Exited              kube-controller-manager   1                   c06990b1c89ba       kube-controller-manager-kubernetes-upgrade-660000
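Note: each control-plane component appears twice above: attempt 1 (Exited) from before the in-place restart and attempt 2 (Running) from after it, matching the restartCluster sequence earlier in the log. The same table can be reproduced inside the node (a sketch; assumes crictl is on the node's PATH, as in recent minikube base images):

  minikube -p kubernetes-upgrade-660000 ssh -- sudo crictl ps -a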
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-660000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-660000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c15b73c23708ade81a1f4f9397c0d397d78bc358
	                    minikube.k8s.io/name=kubernetes-upgrade-660000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_13T12_34_28_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Jun 2023 19:34:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-660000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Jun 2023 19:35:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Jun 2023 19:35:00 +0000   Tue, 13 Jun 2023 19:34:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Jun 2023 19:35:00 +0000   Tue, 13 Jun 2023 19:34:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Jun 2023 19:35:00 +0000   Tue, 13 Jun 2023 19:34:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Jun 2023 19:35:00 +0000   Tue, 13 Jun 2023 19:34:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-660000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 c814e693981345fcb195a894e3318446
	  System UUID:                c814e693981345fcb195a894e3318446
	  Boot ID:                    4dbd5daa-576e-4d10-b041-1b9ba2805377
	  Kernel Version:             5.15.49-linuxkit-pr
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-660000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kube-apiserver-kubernetes-upgrade-660000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-660000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-kubernetes-upgrade-660000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 37s   kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  37s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s   kubelet  Node kubernetes-upgrade-660000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet  Node kubernetes-upgrade-660000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet  Node kubernetes-upgrade-660000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                32s   kubelet  Node kubernetes-upgrade-660000 status is now: NodeReady
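Note: the node.kubernetes.io/not-ready:NoSchedule taint shown above is what left storage-provisioner Pending/Unschedulable earlier in the log; it is normally removed shortly after the node reports Ready. To confirm it has cleared (a sketch; assumes the kubectl context minikube configured for this profile):

  # An empty Taints line means the not-ready taint has been lifted
  kubectl --context kubernetes-upgrade-660000 describe node kubernetes-upgrade-660000 | grep Taints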
	
	* 
	* ==> dmesg <==
	* [Jun13 18:53] FS-Cache: O-cookie d=00000000eaa5deac{9p.inode} n=00000000d363e2eb
	[  +0.000061] FS-Cache: O-key=[8] 'bd08550500000000'
	[  +0.000048] FS-Cache: N-cookie c=00000020 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000061] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000bd643532
	[  +0.000058] FS-Cache: N-key=[8] 'bd08550500000000'
	[  +2.600795] FS-Cache: Duplicate cookie detected
	[  +0.000045] FS-Cache: O-cookie c=0000001a [p=00000017 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000eaa5deac{9p.inode} n=00000000b07f0146
	[  +0.000050] FS-Cache: O-key=[8] 'bc08550500000000'
	[  +0.000051] FS-Cache: N-cookie c=00000023 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000068] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000bd643532
	[  +0.000070] FS-Cache: N-key=[8] 'bc08550500000000'
	[  +0.369338] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=0000001d [p=00000017 fl=226 nc=0 na=1]
	[  +0.000104] FS-Cache: O-cookie d=00000000eaa5deac{9p.inode} n=000000000e272457
	[  +0.000084] FS-Cache: O-key=[8] 'd808550500000000'
	[  +0.000058] FS-Cache: N-cookie c=00000024 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000035] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000bae3f97e
	[  +0.000090] FS-Cache: N-key=[8] 'd808550500000000'
	
	* 
	* ==> etcd [64888b614986] <==
	* {"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-06-13T19:34:57.970Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-06-13T19:34:57.971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-06-13T19:34:57.971Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-06-13T19:34:57.971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:34:57.971Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-06-13T19:34:59.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-06-13T19:34:59.452Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-660000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-13T19:34:59.452Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-13T19:34:59.453Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-13T19:34:59.453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-13T19:34:59.452Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-13T19:34:59.454Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-06-13T19:34:59.454Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [af6005f0fe36] <==
	* {"level":"info","ts":"2023-06-13T19:34:51.157Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-13T19:34:51.157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-06-13T19:34:51.157Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-06-13T19:34:51.157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:34:51.157Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:52.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-06-13T19:34:52.349Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-660000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-13T19:34:52.349Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-13T19:34:52.349Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-13T19:34:52.350Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-13T19:34:52.350Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-13T19:34:52.350Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-13T19:34:52.350Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-06-13T19:34:54.934Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-06-13T19:34:54.934Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-660000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-06-13T19:34:54.951Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-06-13T19:34:54.953Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-06-13T19:34:54.954Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-06-13T19:34:54.954Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-660000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:35:06 up  2:34,  0 users,  load average: 3.11, 1.84, 1.33
	Linux kubernetes-upgrade-660000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7deb4ee95ac2] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0613 19:34:55.948775       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0613 19:34:55.948813       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0613 19:34:55.948934       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
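Note: these connection-refused dials to 127.0.0.1:2379 come from the superseded attempt-1 apiserver still retrying etcd after etcd [af6005f0fe36] shut down at 19:34:54 (see the etcd section above); they stop once the old container exits. That etcd is listening again can be confirmed from the node (a sketch):

  # Expect a LISTEN entry for 2379 owned by the new etcd process
  minikube -p kubernetes-upgrade-660000 ssh -- sudo ss -ltnp | grep 2379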
	
	* 
	* ==> kube-apiserver [a81b0623a547] <==
	* I0613 19:35:00.672933       1 establishing_controller.go:76] Starting EstablishingController
	I0613 19:35:00.672977       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0613 19:35:00.673021       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0613 19:35:00.673065       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0613 19:35:00.673125       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0613 19:35:00.677007       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0613 19:35:00.677047       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0613 19:35:00.735081       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0613 19:35:00.747033       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0613 19:35:00.763886       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0613 19:35:00.765250       1 shared_informer.go:318] Caches are synced for configmaps
	I0613 19:35:00.766598       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0613 19:35:00.766646       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0613 19:35:00.766673       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0613 19:35:00.766706       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0613 19:35:00.766757       1 cache.go:39] Caches are synced for autoregister controller
	I0613 19:35:00.766907       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0613 19:35:00.778151       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0613 19:35:01.440909       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0613 19:35:01.670987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0613 19:35:02.381297       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0613 19:35:02.387992       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0613 19:35:02.410656       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0613 19:35:02.427241       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0613 19:35:02.432850       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
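Note: the single "Error removing old endpoints from kubernetes service: no master IPs were listed in storage" line is commonly seen once during a control-plane restart and is harmless when subsequent requests succeed, as the quota-admission lines here show. The rebuilt endpoint can be checked afterwards (a sketch; the context name is the profile name minikube configured):

  kubectl --context kubernetes-upgrade-660000 get endpoints kubernetes -n default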
	
	* 
	* ==> kube-controller-manager [79bff6de4bbc] <==
	* I0613 19:35:02.773726       1 cleaner.go:82] Starting CSR cleaner controller
	E0613 19:35:02.781895       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0613 19:35:02.781938       1 controllermanager.go:616] "Warning: skipping controller" controller="service"
	I0613 19:35:02.781948       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0613 19:35:02.781953       1 controllermanager.go:616] "Warning: skipping controller" controller="route"
	I0613 19:35:02.814498       1 controllermanager.go:638] "Started controller" controller="daemonset"
	I0613 19:35:02.814673       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0613 19:35:02.814686       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0613 19:35:02.818007       1 controllermanager.go:638] "Started controller" controller="ttl-after-finished"
	I0613 19:35:02.818125       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0613 19:35:02.818136       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0613 19:35:02.825911       1 controllermanager.go:638] "Started controller" controller="endpointslice"
	I0613 19:35:02.826421       1 endpointslice_controller.go:252] Starting endpoint slice controller
	I0613 19:35:02.826480       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0613 19:35:02.829154       1 shared_informer.go:318] Caches are synced for tokens
	I0613 19:35:02.849349       1 controllermanager.go:638] "Started controller" controller="namespace"
	I0613 19:35:02.849478       1 namespace_controller.go:197] "Starting namespace controller"
	I0613 19:35:02.849510       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0613 19:35:02.882231       1 controllermanager.go:638] "Started controller" controller="horizontalpodautoscaling"
	I0613 19:35:02.882272       1 horizontal.go:200] "Starting HPA controller"
	I0613 19:35:02.882278       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0613 19:35:03.019357       1 controllermanager.go:638] "Started controller" controller="disruption"
	I0613 19:35:03.019611       1 disruption.go:423] Sending events to api server.
	I0613 19:35:03.019685       1 disruption.go:434] Starting disruption controller
	I0613 19:35:03.019691       1 shared_informer.go:311] Waiting for caches to sync for disruption
	
	* 
	* ==> kube-controller-manager [7b28173e70ad] <==
	* I0613 19:34:52.022784       1 serving.go:348] Generated self-signed cert in-memory
	I0613 19:34:52.344152       1 controllermanager.go:187] "Starting" version="v1.27.2"
	I0613 19:34:52.344197       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0613 19:34:52.345388       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0613 19:34:52.345489       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0613 19:34:52.345699       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0613 19:34:52.345912       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [002da125729d] <==
	* W0613 19:34:58.825119       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0613 19:34:58.827765       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0613 19:34:58.825145       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0613 19:34:58.827776       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0613 19:34:58.827047       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0613 19:34:58.827816       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0613 19:34:58.828135       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0613 19:34:58.828186       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0613 19:34:58.828410       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0613 19:34:58.828490       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0613 19:35:00.737114       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0613 19:35:00.737235       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0613 19:35:00.737591       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0613 19:35:00.739640       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0613 19:35:00.738418       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0613 19:35:00.739783       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0613 19:35:00.740610       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0613 19:35:00.740755       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0613 19:35:00.740905       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0613 19:35:00.740968       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0613 19:35:00.741059       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0613 19:35:00.741116       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0613 19:35:00.741185       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0613 19:35:00.741235       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0613 19:35:00.822325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [b1f580300c60] <==
	* I0613 19:34:52.053237       1 serving.go:348] Generated self-signed cert in-memory
	I0613 19:34:53.439358       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0613 19:34:53.439407       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0613 19:34:53.443855       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0613 19:34:53.443939       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0613 19:34:53.443979       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0613 19:34:53.444015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0613 19:34:53.445653       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0613 19:34:53.445667       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0613 19:34:53.445731       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0613 19:34:53.445744       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0613 19:34:53.544731       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0613 19:34:53.546280       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0613 19:34:53.547109       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0613 19:34:54.972430       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0613 19:34:54.972497       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0613 19:34:54.972779       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0613 19:34:54.972830       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	I0613 19:34:54.973104       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0613 19:34:54.973235       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0613 19:34:54.973281       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:34:58.086146   13916 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-660000"
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: W0613 19:34:58.244110   13916 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:34:58.244211   13916 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: W0613 19:34:58.447801   13916 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:34:58.447850   13916 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:34:58.492848   13916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06990b1c89bac0ee0c7eca4fcc8ba0f766e73170a56de956f5633b43219033e"
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:34:58.515851   13916 scope.go:115] "RemoveContainer" containerID="7b28173e70ad6bed07d2909afae83220be28bcaf130b1f72d1a8f81973ac8790"
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:34:58.516694   13916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f977bfd9ce876dfa5585c6afe5c7df85a13f388fe9138335f6371613d1e9ae2f"
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:34:58.534425   13916 scope.go:115] "RemoveContainer" containerID="7deb4ee95ac211135ba2f8543a344139bc7c18a3aa776c39d77533819c37b647"
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: W0613 19:34:58.612178   13916 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:34:58.612270   13916 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: W0613 19:34:58.637256   13916 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-660000&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:34:58.637357   13916 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-660000&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:34:58.764226   13916 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-660000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Jun 13 19:34:58 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:34:58.924903   13916 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-660000"
	Jun 13 19:35:00 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:35:00.815083   13916 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-660000"
	Jun 13 19:35:00 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:35:00.815149   13916 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-660000"
	Jun 13 19:35:00 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:35:00.832277   13916 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-660000\" not found"
	Jun 13 19:35:00 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:35:00.932441   13916 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-660000\" not found"
	Jun 13 19:35:01 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:35:01.033551   13916 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-660000\" not found"
	Jun 13 19:35:01 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:35:01.134423   13916 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-660000\" not found"
	Jun 13 19:35:01 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:35:01.337536   13916 apiserver.go:52] "Watching apiserver"
	Jun 13 19:35:01 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:35:01.349299   13916 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jun 13 19:35:01 kubernetes-upgrade-660000 kubelet[13916]: I0613 19:35:01.430101   13916 reconciler.go:41] "Reconciler: start to sync state"
	Jun 13 19:35:01 kubernetes-upgrade-660000 kubelet[13916]: E0613 19:35:01.567875   13916 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-660000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-660000"
	

                                                
                                                
-- /stdout --
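The kube-scheduler "forbidden" lines above (19:35:00) are the usual symptom of an in-place control-plane upgrade: the scheduler's informers start listing before the restarted apiserver has finished its RBAC bootstrap, and they recover seconds later (see the "Caches are synced" line at 19:35:00.822). If the errors persisted, the scheduler's bindings could be spot-checked directly; a minimal sketch, assuming the admin context from this test and the standard `kubectl auth can-i` impersonation probe:

	# Sketch: spot-check kube-scheduler RBAC once the upgraded apiserver answers.
	kubectl --context kubernetes-upgrade-660000 auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl --context kubernetes-upgrade-660000 auth can-i watch statefulsets.apps --as=system:kube-scheduler

On a healthy v1.27 control plane both probes should normally print "yes".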
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-660000 -n kubernetes-upgrade-660000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-660000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-660000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-660000 describe pod storage-provisioner: exit status 1 (53.508391ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-660000 describe pod storage-provisioner: exit status 1
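The NotFound here is a namespace mismatch rather than a missing pod: the non-running-pods listing above used `-A` (all namespaces), and storage-provisioner runs in kube-system, but the describe was issued without a namespace and so hit default. A corrected query, as a sketch against the same profile:

	kubectl --context kubernetes-upgrade-660000 -n kube-system describe pod storage-provisioner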
helpers_test.go:175: Cleaning up "kubernetes-upgrade-660000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-660000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-660000: (2.646776083s)
--- FAIL: TestKubernetesUpgrade (574.37s)

                                                
                                    
TestMissingContainerUpgrade (62.87s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.977569432.exe start -p missing-upgrade-138000 --memory=2200 --driver=docker 
E0613 12:24:41.890495   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:25:19.518516   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:19.523565   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:19.534918   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:19.556946   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:19.597656   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:19.679070   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:19.839180   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:20.159694   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:20.801116   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:25:22.081262   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.977569432.exe start -p missing-upgrade-138000 --memory=2200 --driver=docker : exit status 78 (46.068883494s)

                                                
                                                
-- stdout --
	* [missing-upgrade-138000] minikube v1.9.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-138000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-138000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:25:02.352513764 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-138000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:25:20.865485873 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
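The diff in the stderr above is the failure in miniature: the old v1.9.1 provisioner rewrites /lib/systemd/system/docker.service (TLS host flags, --default-ulimit, the cleared ExecStart=), and the dockerd inside the kicbase v0.0.8 container apparently cannot start with those options, so `systemctl restart docker` exits 1 on both attempts. Since the container itself keeps running (see the inspect output below), the daemon's actual complaint can be read from inside it; a debugging sketch, assuming the container name matches the profile:

	# Sketch: pull docker.service's failure reason out of the kic container.
	docker exec missing-upgrade-138000 systemctl status docker.service --no-pager
	docker exec missing-upgrade-138000 journalctl -u docker.service --no-pager -n 50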
version_upgrade_test.go:321: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.977569432.exe start -p missing-upgrade-138000 --memory=2200 --driver=docker 
E0613 12:25:24.643415   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.977569432.exe start -p missing-upgrade-138000 --memory=2200 --driver=docker : exit status 70 (3.814012709s)

                                                
                                                
-- stdout --
	* [missing-upgrade-138000] minikube v1.9.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-138000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-138000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
E0613 12:25:29.763952   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.977569432.exe start -p missing-upgrade-138000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.977569432.exe start -p missing-upgrade-138000 --memory=2200 --driver=docker : exit status 70 (3.67330089s)

                                                
                                                
-- stdout --
	* [missing-upgrade-138000] minikube v1.9.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-138000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-138000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
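All three start attempts fail at the same `systemctl start docker` step, so retrying the same binary cannot converge. The recovery path is the one minikube itself prints in the DOCKER_RESTART_FAILED message above: discard the half-provisioned profile and start fresh. As a sketch:

	minikube delete -p missing-upgrade-138000
	minikube start -p missing-upgrade-138000 --driver=docker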
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-06-13 12:25:33.687112 -0700 PDT m=+2600.013336218
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-138000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-138000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e87cf2ce55855b137fc55c5604ad34bb088655a1d92178925794425689cc1d69",
	        "Created": "2023-06-13T19:25:10.379112998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 581822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:25:10.564511874Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e87cf2ce55855b137fc55c5604ad34bb088655a1d92178925794425689cc1d69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e87cf2ce55855b137fc55c5604ad34bb088655a1d92178925794425689cc1d69/hostname",
	        "HostsPath": "/var/lib/docker/containers/e87cf2ce55855b137fc55c5604ad34bb088655a1d92178925794425689cc1d69/hosts",
	        "LogPath": "/var/lib/docker/containers/e87cf2ce55855b137fc55c5604ad34bb088655a1d92178925794425689cc1d69/e87cf2ce55855b137fc55c5604ad34bb088655a1d92178925794425689cc1d69-json.log",
	        "Name": "/missing-upgrade-138000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-138000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f2ec373cc45cb37e585a1e964e64107cad4aa8e7b4828313b2e17b20c33cfd66-init/diff:/var/lib/docker/overlay2/bd5d4e5d6e29c152b90cc5ee62014be7bf0ed8e72fbfc9c7f9d15907f6937366/diff:/var/lib/docker/overlay2/629ea90b7463c44281af8facf94b99babf8e2f1a8bbba249ecf4d4a38c077053/diff:/var/lib/docker/overlay2/43a89669bb434d69c16e207115316605e5581a673d267bec603763ca10ae7860/diff:/var/lib/docker/overlay2/6229c8a21fa06566af80ac84eed7dfcfac77aad05af2760837e2fa4f38f3bb81/diff:/var/lib/docker/overlay2/5fe59bf4fca86d2dd693e1b57f40200c9eae3e6af67c52316a9fa227a4efecaa/diff:/var/lib/docker/overlay2/670330be30ea6e867aceedf881c6c81989187a97bfe74bbce21c19d44bbc94c9/diff:/var/lib/docker/overlay2/ae9e860167c87dfae15a19e81c9107ff5c96a3784daedb66b95adbbdaba7c25e/diff:/var/lib/docker/overlay2/bb5e1f22d8511b73f8231e723aefbb454a251d1f53feab386772e2e19a240058/diff:/var/lib/docker/overlay2/0a5910f81daa90fe43ce920e2d6ccba890d3672d2235b8b877238f7f829d500b/diff:/var/lib/docker/overlay2/d33235
242748f221d8d97731d76bb2c1aaadcad7be0c63d71f03c420cf5eb37d/diff:/var/lib/docker/overlay2/979a9678f96c73c005ec310abc94c968661a127a12b9eba26ceb218f0f662dce/diff:/var/lib/docker/overlay2/d41e71ca29e1184a624bbaf7a17ca27724209e175998e98d0d17fde6000b371d/diff:/var/lib/docker/overlay2/4b4aaf81bb876aa687125d1b2894767b67f08af2502a14b474ae85ef0fe63b69/diff:/var/lib/docker/overlay2/71b4d602da9337e8077972fff4a79248039c9c69d753d7f0108b872b732610f6/diff:/var/lib/docker/overlay2/79708989956ebd16e975d67910844b03d5c881441f813727f7489eda6c264df1/diff:/var/lib/docker/overlay2/1e31811a33ddb038a79f67fe4eaf9df0bab36984ad6295a3274a06abbb3c7cb4/diff:/var/lib/docker/overlay2/8f20a1e9b92d450879b34af4439556841635e88546372c652c4dd0b0779d874e/diff:/var/lib/docker/overlay2/d2d7dda6a90274cf2aed78112a265a069871fa702a8f5cfe89c62fcdbb532975/diff:/var/lib/docker/overlay2/111cadc0bbbcfe2d59657a70bd899942e4652188868b70c5968af9e77f99be2f/diff:/var/lib/docker/overlay2/de200cb230ab4e7d17c2e0cce405051fa7aab9233e9316629237ed9dff7a36ba/diff:/var/lib/d
ocker/overlay2/f7e359c04e5c9655c68543b182a5e47cf9a29012e1a8be825737c6fe57e7d3d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2ec373cc45cb37e585a1e964e64107cad4aa8e7b4828313b2e17b20c33cfd66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2ec373cc45cb37e585a1e964e64107cad4aa8e7b4828313b2e17b20c33cfd66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2ec373cc45cb37e585a1e964e64107cad4aa8e7b4828313b2e17b20c33cfd66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-138000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-138000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-138000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-138000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-138000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5140c1106a01faf96108118c34d803933b1f0c02de9238764ba4c0e0ecc974fb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57747"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57748"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57749"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5140c1106a01",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "208f663b2bef947a70722f36711184bd65840aa09aa333912f9cfeceb19457fb",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f92755f08c7bc9f387e31ff696d704be3f498e82a4b89101efe28d5f4f3be670",
	                    "EndpointID": "208f663b2bef947a70722f36711184bd65840aa09aa333912f9cfeceb19457fb",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
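The inspect output shows the container itself is fine: State.Status is "running" with exit code 0, privileged, with the SSH (22), dockerd (2376) and apiserver (8443) ports published on 127.0.0.1; only the docker daemon inside it is down. The mapped host ports can be read back without walking the JSON; a sketch:

	# Sketch: resolve the host port mapped to the container's SSH port.
	docker port missing-upgrade-138000 22
	# prints e.g. 127.0.0.1:57747, matching the Ports section above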
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-138000 -n missing-upgrade-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-138000 -n missing-upgrade-138000: exit status 6 (346.916928ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0613 12:25:34.073945   30124 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-138000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-138000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
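Independently of the docker failure, the status probe also reports a kubeconfig problem: the profile "missing-upgrade-138000" never made it into /Users/jenkins/minikube-integration/15003-20351/kubeconfig, hence the stale-context warning. The tool's own suggestion covers it; as a sketch:

	minikube update-context -p missing-upgrade-138000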
helpers_test.go:175: Cleaning up "missing-upgrade-138000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-138000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-138000: (2.24744902s)
--- FAIL: TestMissingContainerUpgrade (62.87s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (56.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3324457191.exe start -p stopped-upgrade-326000 --memory=2200 --vm-driver=docker 
E0613 12:26:38.835976   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:26:41.443739   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:26:42.266738   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3324457191.exe start -p stopped-upgrade-326000 --memory=2200 --vm-driver=docker : exit status 70 (45.839958971s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-326000] minikube v1.9.0 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1621381359
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:26:52.576668857 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-326000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:27:11.540826962 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-326000", then "minikube start -p stopped-upgrade-326000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
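TestStoppedBinaryUpgrade fails the same way as TestMissingContainerUpgrade above: the v1.9.0 binary rewrites docker.service inside the kic container, dockerd refuses to start, and the post-delete retry hits the identical diff. The binary's own recovery hint from the stdout above is the sketch to follow:

	minikube delete -p stopped-upgrade-326000
	minikube start -p stopped-upgrade-326000 --alsologtostderr -v=1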
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:27:11.540826962 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
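The rewritten unit in the diff above also documents the systemd behavior that makes the empty ExecStart= line mandatory: a unit that inherits an ExecStart= from a base configuration must clear it before defining a new one, or systemd refuses to start the service with "Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services". A minimal sketch of the same pattern as a standalone drop-in (an illustrative override.conf path, not the in-place rewrite minikube performs):

	# /etc/systemd/system/docker.service.d/override.conf  (example drop-in location)
	[Service]
	# Clear the ExecStart= inherited from the base docker.service first ...
	ExecStart=
	# ... then define the replacement command.
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	
	# Reload unit files and restart to apply:
	#   sudo systemctl daemon-reload && sudo systemctl restart docker

Since the failing diff applies this pattern correctly, the start failure likely comes from something dockerd itself rejects at runtime, which is what the systemctl/journalctl output suggested above would reveal.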
version_upgrade_test.go:195: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3324457191.exe start -p stopped-upgrade-326000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3324457191.exe start -p stopped-upgrade-326000 --memory=2200 --vm-driver=docker : exit status 70 (4.143175332s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-326000] minikube v1.9.0 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig4253811307
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-326000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:195: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3324457191.exe start -p stopped-upgrade-326000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3324457191.exe start -p stopped-upgrade-326000 --memory=2200 --vm-driver=docker : exit status 70 (4.057589949s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-326000] minikube v1.9.0 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1326479782
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-326000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:201: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (56.03s)
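Both retries fail identically: "sudo systemctl start docker" exits with status 1 inside the upgraded container, so minikube never gets past enabling the container runtime. The log's own advice is to read the unit status and journal on the node; assuming the stopped-upgrade-326000 profile still existed at that point, one way to pull those from the host would be:

	minikube ssh -p stopped-upgrade-326000 "sudo systemctl status docker --no-pager"
	minikube ssh -p stopped-upgrade-326000 "sudo journalctl -u docker --no-pager -n 50"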

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-906000 --driver=docker 
E0613 12:29:45.319602   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-906000 --driver=docker : exit status 80 (25.186025494s)

                                                
                                                
-- stdout --
	* [NoKubernetes-906000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node NoKubernetes-906000 in cluster NoKubernetes-906000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: kubernetes client: client config: client config: context "NoKubernetes-906000" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-906000 --driver=docker " : exit status 80
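The GUEST_START error above appears to be a client-side kubeconfig problem rather than a provisioning one: the stdout block shows the container and control plane coming up, but no context named "NoKubernetes-906000" exists in the kubeconfig minikube tries to load. Two standard checks (using the KUBECONFIG path printed in the stdout block) would show what was actually written:

	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	minikube profile list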
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-906000
helpers_test.go:235: (dbg) docker inspect NoKubernetes-906000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11760139e2df929f965bcba589a7aa4439a4bbb09711552f41a1522d1402f166",
	        "Created": "2023-06-13T19:29:40.007488537Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 602725,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:29:40.221880725Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/11760139e2df929f965bcba589a7aa4439a4bbb09711552f41a1522d1402f166/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11760139e2df929f965bcba589a7aa4439a4bbb09711552f41a1522d1402f166/hostname",
	        "HostsPath": "/var/lib/docker/containers/11760139e2df929f965bcba589a7aa4439a4bbb09711552f41a1522d1402f166/hosts",
	        "LogPath": "/var/lib/docker/containers/11760139e2df929f965bcba589a7aa4439a4bbb09711552f41a1522d1402f166/11760139e2df929f965bcba589a7aa4439a4bbb09711552f41a1522d1402f166-json.log",
	        "Name": "/NoKubernetes-906000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-906000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-906000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 6181355520,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6181355520,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8a8fa5d8e1feb7bfca57069e483b529791e1ccb028b34e5860a6ca7d8572c4a-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8a8fa5d8e1feb7bfca57069e483b529791e1ccb028b34e5860a6ca7d8572c4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8a8fa5d8e1feb7bfca57069e483b529791e1ccb028b34e5860a6ca7d8572c4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8a8fa5d8e1feb7bfca57069e483b529791e1ccb028b34e5860a6ca7d8572c4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-906000",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-906000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-906000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-906000",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-906000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8794007a62ac9f3016dc40fad753129d6309976004037758730da21a0d167af",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57975"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57971"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c8794007a62a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-906000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "11760139e2df",
	                        "NoKubernetes-906000"
	                    ],
	                    "NetworkID": "be85a9f3e84a0a19ebf66151e29ac28ca65c36ba21dd568aacad5f04cb7c4682",
	                    "EndpointID": "d634570b58919ff23c7508f00a9fa3bf5a3568bf8e34c78db7fb71741522cc88",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
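The full docker inspect dump above confirms the container itself is healthy: State.Status is "running", and ports 22, 2376, 5000, 8443, and 32443 are all published to the loopback interface. When only a couple of fields matter, the same data can be queried with docker's Go-template format flag, for example:

	docker inspect -f '{{.State.Status}}' NoKubernetes-906000
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' NoKubernetes-906000

The first prints the run state; the second prints the host port mapped to the node's SSH port (57975 in this run).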
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-906000 -n NoKubernetes-906000
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-906000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p NoKubernetes-906000 logs -n 25: (2.648234684s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithK8s logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-367000             | cert-expiration-367000    | jenkins | v1.30.1 | 13 Jun 23 12:22 PDT | 13 Jun 23 12:22 PDT |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	| ssh     | docker-flags-016000 ssh               | docker-flags-016000       | jenkins | v1.30.1 | 13 Jun 23 12:22 PDT | 13 Jun 23 12:22 PDT |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-016000 ssh               | docker-flags-016000       | jenkins | v1.30.1 | 13 Jun 23 12:22 PDT | 13 Jun 23 12:22 PDT |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-016000                | docker-flags-016000       | jenkins | v1.30.1 | 13 Jun 23 12:22 PDT | 13 Jun 23 12:22 PDT |
	| start   | -p cert-options-953000                | cert-options-953000       | jenkins | v1.30.1 | 13 Jun 23 12:22 PDT | 13 Jun 23 12:23 PDT |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --apiserver-name=localhost            |                           |         |         |                     |                     |
	| ssh     | cert-options-953000 ssh               | cert-options-953000       | jenkins | v1.30.1 | 13 Jun 23 12:23 PDT | 13 Jun 23 12:23 PDT |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-953000 -- sudo        | cert-options-953000       | jenkins | v1.30.1 | 13 Jun 23 12:23 PDT | 13 Jun 23 12:23 PDT |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-953000                | cert-options-953000       | jenkins | v1.30.1 | 13 Jun 23 12:23 PDT | 13 Jun 23 12:23 PDT |
	| delete  | -p running-upgrade-426000             | running-upgrade-426000    | jenkins | v1.30.1 | 13 Jun 23 12:24 PDT | 13 Jun 23 12:24 PDT |
	| delete  | -p missing-upgrade-138000             | missing-upgrade-138000    | jenkins | v1.30.1 | 13 Jun 23 12:25 PDT | 13 Jun 23 12:25 PDT |
	| start   | -p kubernetes-upgrade-660000          | kubernetes-upgrade-660000 | jenkins | v1.30.1 | 13 Jun 23 12:25 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	| start   | -p cert-expiration-367000             | cert-expiration-367000    | jenkins | v1.30.1 | 13 Jun 23 12:25 PDT | 13 Jun 23 12:26 PDT |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-367000             | cert-expiration-367000    | jenkins | v1.30.1 | 13 Jun 23 12:26 PDT | 13 Jun 23 12:26 PDT |
	| delete  | -p stopped-upgrade-326000             | stopped-upgrade-326000    | jenkins | v1.30.1 | 13 Jun 23 12:27 PDT | 13 Jun 23 12:27 PDT |
	| start   | -p pause-879000 --memory=2048         | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:27 PDT | 13 Jun 23 12:28 PDT |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	| start   | -p pause-879000                       | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:28 PDT | 13 Jun 23 12:29 PDT |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	| pause   | -p pause-879000                       | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT | 13 Jun 23 12:29 PDT |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-879000                       | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT | 13 Jun 23 12:29 PDT |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-879000                       | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT | 13 Jun 23 12:29 PDT |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-879000                       | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT | 13 Jun 23 12:29 PDT |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-879000                       | pause-879000              | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT | 13 Jun 23 12:29 PDT |
	| start   | -p NoKubernetes-906000                | NoKubernetes-906000       | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-906000                | NoKubernetes-906000       | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-660000          | kubernetes-upgrade-660000 | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT | 13 Jun 23 12:29 PDT |
	| start   | -p kubernetes-upgrade-660000          | kubernetes-upgrade-660000 | jenkins | v1.30.1 | 13 Jun 23 12:29 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 12:29:57
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 12:29:57.547857   31252 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:29:57.548019   31252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:29:57.548024   31252 out.go:309] Setting ErrFile to fd 2...
	I0613 12:29:57.548029   31252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:29:57.548146   31252 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:29:57.549698   31252 out.go:303] Setting JSON to false
	I0613 12:29:57.570974   31252 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8968,"bootTime":1686675629,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 12:29:57.571073   31252 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 12:29:57.593434   31252 out.go:177] * [kubernetes-upgrade-660000] minikube v1.30.1 on Darwin 13.4
	I0613 12:29:57.636048   31252 notify.go:220] Checking for updates...
	I0613 12:29:57.657972   31252 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 12:29:57.700743   31252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:29:57.742626   31252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 12:29:57.784752   31252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 12:29:57.826714   31252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 12:29:57.847859   31252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 12:29:57.869047   31252 config.go:182] Loaded profile config "kubernetes-upgrade-660000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0613 12:29:57.869535   31252 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 12:29:57.928715   31252 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 12:29:57.928829   31252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:29:58.035463   31252 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:29:58.021838915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<
nil>}}
	I0613 12:29:58.059528   31252 out.go:177] * Using the docker driver based on existing profile
	I0613 12:29:58.101941   31252 start.go:297] selected driver: docker
	I0613 12:29:58.101955   31252 start.go:884] validating driver "docker" against &{Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-660000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:29:58.102049   31252 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 12:29:58.104889   31252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:29:58.233465   31252 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:29:58.191847641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<
nil>}}
	I0613 12:29:58.233755   31252 cni.go:84] Creating CNI manager for ""
	I0613 12:29:58.233774   31252 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 12:29:58.233794   31252 start_flags.go:319] config:
	{Name:kubernetes-upgrade-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-660000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	
	* 
	* ==> Docker <==
	* Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.147246703Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.153055124Z" level=info msg="Loading containers: start."
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.240347328Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.277356041Z" level=info msg="Loading containers: done."
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.286164023Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.286227032Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.317826250Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:29:44 NoKubernetes-906000 dockerd[1039]: time="2023-06-13T19:29:44.317925752Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:29:44 NoKubernetes-906000 systemd[1]: Started Docker Application Container Engine.
	Jun 13 19:29:44 NoKubernetes-906000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Start docker client with request timeout 0s"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Loaded network plugin cni"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Docker Info: &{ID:6423cb6a-592a-42e6-988d-26cdada5ab84 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:35 SystemTime:2023-06-13T19:29:44.773896261Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.15.49-linuxkit-pr OperatingSys
tem:Ubuntu 22.04.2 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002a4230 NCPU:6 MemTotal:6231715840 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:NoKubernetes-906000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLi
cense: DefaultAddressPools:[] Warnings:[]}"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jun 13 19:29:44 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:44Z" level=info msg="Start cri-dockerd grpc backend"
	Jun 13 19:29:44 NoKubernetes-906000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jun 13 19:29:49 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73a6edde1222caddcd400aae653ca09ebb1f2e87017846358848e3588c5c6df7/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:29:49 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a05d75d6cdc92147be40011bd05125cbc74eb47ff980be31a71edcccc3db0349/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:29:49 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc23ef2da725bb3824aa1b87fe414098af4691984b7113f2edae31a71fe6b48c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jun 13 19:29:49 NoKubernetes-906000 cri-dockerd[1260]: time="2023-06-13T19:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7967047fa94b53039f4114bffdf2277f80de82579162641e7619b60aa196b89e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a4bb5fb8e332       86b6af7dd652c       10 seconds ago      Running             etcd                      0                   a05d75d6cdc92       etcd-nokubernetes-906000
	d7e3ba5743594       ac2b7465ebba9       10 seconds ago      Running             kube-controller-manager   0                   7967047fa94b5       kube-controller-manager-nokubernetes-906000
	2a7bd988f16d8       89e70da428d29       10 seconds ago      Running             kube-scheduler            0                   cc23ef2da725b       kube-scheduler-nokubernetes-906000
	df150bef03e3b       c5b13e4f7806d       10 seconds ago      Running             kube-apiserver            0                   73a6edde1222c       kube-apiserver-nokubernetes-906000
	
	* 
	* ==> describe nodes <==
	* Name:               nokubernetes-906000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=nokubernetes-906000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c15b73c23708ade81a1f4f9397c0d397d78bc358
	                    minikube.k8s.io/name=NoKubernetes-906000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_13T12_29_55_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Jun 2023 19:29:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  nokubernetes-906000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Jun 2023 19:29:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Jun 2023 19:29:55 +0000   Tue, 13 Jun 2023 19:29:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Jun 2023 19:29:55 +0000   Tue, 13 Jun 2023 19:29:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Jun 2023 19:29:55 +0000   Tue, 13 Jun 2023 19:29:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Jun 2023 19:29:55 +0000   Tue, 13 Jun 2023 19:29:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    nokubernetes-906000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8b959d77cad418a9ddc717ba2b36b9d
	  System UUID:                d8b959d77cad418a9ddc717ba2b36b9d
	  Boot ID:                    4dbd5daa-576e-4d10-b041-1b9ba2805377
	  Kernel Version:             5.15.49-linuxkit-pr
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-nokubernetes-906000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-nokubernetes-906000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-nokubernetes-906000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-nokubernetes-906000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node nokubernetes-906000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node nokubernetes-906000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node nokubernetes-906000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.000058] FS-Cache: O-key=[8] 'bd08550500000000'
	[  +0.000041] FS-Cache: N-cookie c=0000001f [p=00000017 fl=2 nc=0 na=1]
	[  +0.000059] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000ed077c58
	[  +0.000046] FS-Cache: N-key=[8] 'bd08550500000000'
	[  +0.001534] FS-Cache: Duplicate cookie detected
	[  +0.000033] FS-Cache: O-cookie c=00000019 [p=00000017 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=00000000eaa5deac{9p.inode} n=00000000d363e2eb
	[  +0.000061] FS-Cache: O-key=[8] 'bd08550500000000'
	[  +0.000048] FS-Cache: N-cookie c=00000020 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000061] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000bd643532
	[  +0.000058] FS-Cache: N-key=[8] 'bd08550500000000'
	[  +2.600795] FS-Cache: Duplicate cookie detected
	[  +0.000045] FS-Cache: O-cookie c=0000001a [p=00000017 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000eaa5deac{9p.inode} n=00000000b07f0146
	[  +0.000050] FS-Cache: O-key=[8] 'bc08550500000000'
	[  +0.000051] FS-Cache: N-cookie c=00000023 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000068] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000bd643532
	[  +0.000070] FS-Cache: N-key=[8] 'bc08550500000000'
	[  +0.369338] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=0000001d [p=00000017 fl=226 nc=0 na=1]
	[  +0.000104] FS-Cache: O-cookie d=00000000eaa5deac{9p.inode} n=000000000e272457
	[  +0.000084] FS-Cache: O-key=[8] 'd808550500000000'
	[  +0.000058] FS-Cache: N-cookie c=00000024 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000035] FS-Cache: N-cookie d=00000000eaa5deac{9p.inode} n=00000000bae3f97e
	[  +0.000090] FS-Cache: N-key=[8] 'd808550500000000'
	
	* 
	* ==> etcd [5a4bb5fb8e33] <==
	* {"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2023-06-13T19:29:50.454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-06-13T19:29:50.455Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:nokubernetes-906000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-13T19:29:50.455Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-13T19:29:50.456Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-13T19:29:50.456Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:29:50.456Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-13T19:29:50.457Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-13T19:29:50.457Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-13T19:29:50.457Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-06-13T19:29:50.457Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:29:50.457Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:29:50.457Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-13T19:29:55.484Z","caller":"traceutil/trace.go:171","msg":"trace[1858582991] transaction","detail":"{read_only:false; response_revision:221; number_of_response:1; }","duration":"133.616921ms","start":"2023-06-13T19:29:55.351Z","end":"2023-06-13T19:29:55.484Z","steps":["trace[1858582991] 'process raft request'  (duration: 133.554511ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-13T19:29:55.484Z","caller":"traceutil/trace.go:171","msg":"trace[1108387282] transaction","detail":"{read_only:false; response_revision:220; number_of_response:1; }","duration":"161.957872ms","start":"2023-06-13T19:29:55.322Z","end":"2023-06-13T19:29:55.484Z","steps":["trace[1108387282] 'process raft request'  (duration: 96.769891ms)","trace[1108387282] 'compare'  (duration: 64.695812ms)"],"step_count":2}
	{"level":"info","ts":"2023-06-13T19:29:55.485Z","caller":"traceutil/trace.go:171","msg":"trace[1331641829] linearizableReadLoop","detail":"{readStateIndex:226; appliedIndex:225; }","duration":"134.762627ms","start":"2023-06-13T19:29:55.350Z","end":"2023-06-13T19:29:55.485Z","steps":["trace[1331641829] 'read index received'  (duration: 68.63375ms)","trace[1331641829] 'applied index is now lower than readState.Index'  (duration: 66.12758ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-13T19:29:55.485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/nokubernetes-906000\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-13T19:29:55.485Z","caller":"traceutil/trace.go:171","msg":"trace[481004312] range","detail":"{range_begin:/registry/leases/kube-node-lease/nokubernetes-906000; range_end:; response_count:0; response_revision:221; }","duration":"166.445175ms","start":"2023-06-13T19:29:55.318Z","end":"2023-06-13T19:29:55.485Z","steps":["trace[481004312] 'agreement among raft nodes before linearized reading'  (duration: 166.258241ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-13T19:29:55.487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.848305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/nokubernetes-906000\" ","response":"range_response_count:1 size:697"}
	{"level":"info","ts":"2023-06-13T19:29:55.487Z","caller":"traceutil/trace.go:171","msg":"trace[1460057977] range","detail":"{range_begin:/registry/csinodes/nokubernetes-906000; range_end:; response_count:1; response_revision:221; }","duration":"134.91717ms","start":"2023-06-13T19:29:55.352Z","end":"2023-06-13T19:29:55.487Z","steps":["trace[1460057977] 'agreement among raft nodes before linearized reading'  (duration: 134.79163ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:30:00 up  2:28,  0 users,  load average: 0.82, 1.20, 1.06
	Linux NoKubernetes-906000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kube-apiserver [df150bef03e3] <==
	* I0613 19:29:52.078044       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0613 19:29:52.078118       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0613 19:29:52.078216       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0613 19:29:52.078358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0613 19:29:52.078383       1 cache.go:39] Caches are synced for autoregister controller
	I0613 19:29:52.078465       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0613 19:29:52.078627       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0613 19:29:52.078634       1 shared_informer.go:318] Caches are synced for configmaps
	I0613 19:29:52.120689       1 controller.go:624] quota admission added evaluator for: namespaces
	E0613 19:29:52.124152       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0613 19:29:52.327918       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0613 19:29:52.764858       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0613 19:29:52.982277       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0613 19:29:52.985495       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0613 19:29:52.985507       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0613 19:29:53.438073       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0613 19:29:53.471889       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0613 19:29:53.543611       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0613 19:29:53.549068       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0613 19:29:53.549742       1 controller.go:624] quota admission added evaluator for: endpoints
	I0613 19:29:53.553773       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0613 19:29:54.045569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0613 19:29:55.526031       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0613 19:29:55.539774       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0613 19:29:55.550513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [d7e3ba574359] <==
	* I0613 19:29:50.723148       1 controllermanager.go:187] "Starting" version="v1.27.2"
	I0613 19:29:50.723190       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0613 19:29:50.724123       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0613 19:29:50.724214       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0613 19:29:50.724566       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0613 19:29:50.724619       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0613 19:29:54.040923       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0613 19:29:54.048232       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
	I0613 19:29:54.048287       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0613 19:29:54.048293       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0613 19:29:54.048893       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
	I0613 19:29:54.048928       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0613 19:29:54.048944       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0613 19:29:54.049644       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0613 19:29:54.049677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0613 19:29:54.049683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0613 19:29:54.050358       1 controllermanager.go:638] "Started controller" controller="csrsigning"
	I0613 19:29:54.050497       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0613 19:29:54.050503       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0613 19:29:54.050514       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0613 19:29:54.056553       1 controllermanager.go:638] "Started controller" controller="tokencleaner"
	I0613 19:29:54.056697       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0613 19:29:54.056703       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0613 19:29:54.056708       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0613 19:29:54.141988       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [2a7bd988f16d] <==
	* W0613 19:29:52.879318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0613 19:29:52.879386       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0613 19:29:52.879991       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0613 19:29:52.880031       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0613 19:29:52.896561       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0613 19:29:52.896623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0613 19:29:52.978449       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0613 19:29:52.978537       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0613 19:29:53.047576       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0613 19:29:53.047625       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0613 19:29:53.067975       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0613 19:29:53.067997       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0613 19:29:53.190029       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0613 19:29:53.190074       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0613 19:29:53.220908       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0613 19:29:53.220977       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0613 19:29:53.221078       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0613 19:29:53.221089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0613 19:29:53.221446       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0613 19:29:53.221552       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0613 19:29:53.222676       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0613 19:29:53.222694       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0613 19:29:53.228274       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0613 19:29:53.228315       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0613 19:29:56.041885       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.431574    2349 topology_manager.go:212] "Topology Admit Handler"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.493912    2349 kubelet_node_status.go:108] "Node was previously registered" node="nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.494031    2349 kubelet_node_status.go:73] "Successfully registered node" node="nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520480    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-kubeconfig\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520525    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a10d4300fad63297a582663deb4bd0d9-kubeconfig\") pod \"kube-scheduler-nokubernetes-906000\" (UID: \"a10d4300fad63297a582663deb4bd0d9\") " pod="kube-system/kube-scheduler-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520555    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ea616f6afadc53cbd75ef50ce75e9035-etcd-certs\") pod \"etcd-nokubernetes-906000\" (UID: \"ea616f6afadc53cbd75ef50ce75e9035\") " pod="kube-system/etcd-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520582    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ea616f6afadc53cbd75ef50ce75e9035-etcd-data\") pod \"etcd-nokubernetes-906000\" (UID: \"ea616f6afadc53cbd75ef50ce75e9035\") " pod="kube-system/etcd-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520621    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/320298ffd68d5c8810d9f590506bd1d6-usr-local-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-906000\" (UID: \"320298ffd68d5c8810d9f590506bd1d6\") " pod="kube-system/kube-apiserver-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520686    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/320298ffd68d5c8810d9f590506bd1d6-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-906000\" (UID: \"320298ffd68d5c8810d9f590506bd1d6\") " pod="kube-system/kube-apiserver-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520739    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/320298ffd68d5c8810d9f590506bd1d6-ca-certs\") pod \"kube-apiserver-nokubernetes-906000\" (UID: \"320298ffd68d5c8810d9f590506bd1d6\") " pod="kube-system/kube-apiserver-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520757    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-etc-ca-certificates\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520810    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-k8s-certs\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.520936    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-usr-local-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.521063    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-usr-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.521193    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/320298ffd68d5c8810d9f590506bd1d6-etc-ca-certificates\") pod \"kube-apiserver-nokubernetes-906000\" (UID: \"320298ffd68d5c8810d9f590506bd1d6\") " pod="kube-system/kube-apiserver-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.521491    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/320298ffd68d5c8810d9f590506bd1d6-k8s-certs\") pod \"kube-apiserver-nokubernetes-906000\" (UID: \"320298ffd68d5c8810d9f590506bd1d6\") " pod="kube-system/kube-apiserver-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.521530    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-ca-certs\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:55 NoKubernetes-906000 kubelet[2349]: I0613 19:29:55.521615    2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea72dfa2da3e4b53b8ed63fac2fbdbfe-flexvolume-dir\") pod \"kube-controller-manager-nokubernetes-906000\" (UID: \"ea72dfa2da3e4b53b8ed63fac2fbdbfe\") " pod="kube-system/kube-controller-manager-nokubernetes-906000"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.272416    2349 apiserver.go:52] "Watching apiserver"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.289528    2349 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.328476    2349 reconciler.go:41] "Reconciler: start to sync state"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.527329    2349 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-906000" podStartSLOduration=1.527289363 podCreationTimestamp="2023-06-13 19:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-13 19:29:56.451570558 +0000 UTC m=+1.266378325" watchObservedRunningTime="2023-06-13 19:29:56.527289363 +0000 UTC m=+1.342097122"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.535870    2349 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-906000" podStartSLOduration=1.535841412 podCreationTimestamp="2023-06-13 19:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-13 19:29:56.52775128 +0000 UTC m=+1.342559047" watchObservedRunningTime="2023-06-13 19:29:56.535841412 +0000 UTC m=+1.350649172"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.535973    2349 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-906000" podStartSLOduration=1.535958939 podCreationTimestamp="2023-06-13 19:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-13 19:29:56.535647274 +0000 UTC m=+1.350455041" watchObservedRunningTime="2023-06-13 19:29:56.535958939 +0000 UTC m=+1.350766699"
	Jun 13 19:29:56 NoKubernetes-906000 kubelet[2349]: I0613 19:29:56.551129    2349 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-906000" podStartSLOduration=1.551102202 podCreationTimestamp="2023-06-13 19:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-13 19:29:56.543762587 +0000 UTC m=+1.358570344" watchObservedRunningTime="2023-06-13 19:29:56.551102202 +0000 UTC m=+1.365909969"
	

                                                
                                                
-- /stdout --
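The dump above is the standard minikube post-mortem bundle (system journal, container status, describe nodes, dmesg, component logs). For local triage it can be regenerated from any live profile; a minimal sketch, assuming minikube v1.30.x and substituting your own profile name for the CI-generated NoKubernetes-906000:

	# Recreate the same log bundle for a running profile and write it to a file
	out/minikube-darwin-amd64 logs -p NoKubernetes-906000 --file=/tmp/NoKubernetes-906000.log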
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-906000 -n NoKubernetes-906000
helpers_test.go:261: (dbg) Run:  kubectl --context NoKubernetes-906000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestNoKubernetes/serial/StartWithK8s]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context NoKubernetes-906000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context NoKubernetes-906000 describe pod storage-provisioner: exit status 1 (58.474149ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context NoKubernetes-906000 describe pod storage-provisioner: exit status 1
--- FAIL: TestNoKubernetes/serial/StartWithK8s (29.00s)
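The post-mortem sequence above can be replayed by hand when triaging this failure; a minimal sketch using the same commands the harness ran (the profile name NoKubernetes-906000 is specific to this CI run and will differ locally):

	# Confirm the apiserver is up for the profile
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-906000 -n NoKubernetes-906000
	# List pods that are not in phase Running, across all namespaces
	kubectl --context NoKubernetes-906000 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# Describe the straggler; the NotFound above suggests storage-provisioner was
	# deleted between the list and the describe, rather than being stuck
	kubectl --context NoKubernetes-906000 describe pod storage-provisioner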

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (259.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m19.403639084s)

                                                
                                                
-- stdout --
	* [old-k8s-version-554000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-554000 in cluster old-k8s-version-554000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0613 12:39:22.826629   36348 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:39:22.826826   36348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:39:22.826832   36348 out.go:309] Setting ErrFile to fd 2...
	I0613 12:39:22.826836   36348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:39:22.826960   36348 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:39:22.829196   36348 out.go:303] Setting JSON to false
	I0613 12:39:22.860377   36348 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9533,"bootTime":1686675629,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 12:39:22.860493   36348 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 12:39:22.900656   36348 out.go:177] * [old-k8s-version-554000] minikube v1.30.1 on Darwin 13.4
	I0613 12:39:22.921531   36348 notify.go:220] Checking for updates...
	I0613 12:39:22.958627   36348 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 12:39:23.034713   36348 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:39:23.080772   36348 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 12:39:23.155670   36348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 12:39:23.213710   36348 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 12:39:23.255589   36348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 12:39:23.277288   36348 config.go:182] Loaded profile config "kubenet-185000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:39:23.277403   36348 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 12:39:23.340155   36348 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 12:39:23.340316   36348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:39:23.452424   36348 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:39:23.436083886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:39:23.476658   36348 out.go:177] * Using the docker driver based on user configuration
	I0613 12:39:23.497710   36348 start.go:297] selected driver: docker
	I0613 12:39:23.497733   36348 start.go:884] validating driver "docker" against <nil>
	I0613 12:39:23.497779   36348 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 12:39:23.501776   36348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:39:23.595704   36348 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:39:23.585377238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:39:23.595888   36348 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0613 12:39:23.596094   36348 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0613 12:39:23.617856   36348 out.go:177] * Using Docker Desktop driver with root privileges
	I0613 12:39:23.639363   36348 cni.go:84] Creating CNI manager for ""
	I0613 12:39:23.639389   36348 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 12:39:23.639401   36348 start_flags.go:319] config:
	{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:39:23.660421   36348 out.go:177] * Starting control plane node old-k8s-version-554000 in cluster old-k8s-version-554000
	I0613 12:39:23.681574   36348 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 12:39:23.703355   36348 out.go:177] * Pulling base image ...
	I0613 12:39:23.747529   36348 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:39:23.747562   36348 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 12:39:23.747640   36348 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0613 12:39:23.747669   36348 cache.go:57] Caching tarball of preloaded images
	I0613 12:39:23.747880   36348 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 12:39:23.747898   36348 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0613 12:39:23.748992   36348 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/config.json ...
	I0613 12:39:23.749143   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/config.json: {Name:mk355b9596d33ea4a4b0d801dae1dde1e9cdeae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:23.798636   36348 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 12:39:23.798654   36348 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 12:39:23.798674   36348 cache.go:195] Successfully downloaded all kic artifacts
	I0613 12:39:23.798836   36348 start.go:365] acquiring machines lock for old-k8s-version-554000: {Name:mk0a9b1134645f4b38304ff0b8ed03f330d2f839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 12:39:23.799004   36348 start.go:369] acquired machines lock for "old-k8s-version-554000" in 156.441µs
	I0613 12:39:23.799035   36348 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0613 12:39:23.799109   36348 start.go:125] createHost starting for "" (driver="docker")
	I0613 12:39:23.820290   36348 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0613 12:39:23.820600   36348 start.go:159] libmachine.API.Create for "old-k8s-version-554000" (driver="docker")
	I0613 12:39:23.820638   36348 client.go:168] LocalClient.Create starting
	I0613 12:39:23.820771   36348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem
	I0613 12:39:23.820818   36348 main.go:141] libmachine: Decoding PEM data...
	I0613 12:39:23.820842   36348 main.go:141] libmachine: Parsing certificate...
	I0613 12:39:23.820937   36348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem
	I0613 12:39:23.820975   36348 main.go:141] libmachine: Decoding PEM data...
	I0613 12:39:23.820989   36348 main.go:141] libmachine: Parsing certificate...
	I0613 12:39:23.821582   36348 cli_runner.go:164] Run: docker network inspect old-k8s-version-554000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0613 12:39:23.871955   36348 cli_runner.go:211] docker network inspect old-k8s-version-554000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0613 12:39:23.872070   36348 network_create.go:281] running [docker network inspect old-k8s-version-554000] to gather additional debugging logs...
	I0613 12:39:23.872093   36348 cli_runner.go:164] Run: docker network inspect old-k8s-version-554000
	W0613 12:39:23.921673   36348 cli_runner.go:211] docker network inspect old-k8s-version-554000 returned with exit code 1
	I0613 12:39:23.921704   36348 network_create.go:284] error running [docker network inspect old-k8s-version-554000]: docker network inspect old-k8s-version-554000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-554000 not found
	I0613 12:39:23.921731   36348 network_create.go:286] output of [docker network inspect old-k8s-version-554000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-554000 not found
	
	** /stderr **
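
[Editor's note] When the formatted `docker network inspect` fails, minikube re-runs the bare command purely to capture its stdout/stderr for the log, which is what produces the `-- stdout --` / `** stderr **` blocks above. A minimal sketch of that capture pattern in Go (illustrative only; `runForDebug` is an invented helper, not minikube's actual cli_runner code):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // runForDebug executes a command and returns stdout and stderr
    // separately, so a failure can be logged with full context.
    func runForDebug(name string, args ...string) (string, string, error) {
        var stdout, stderr bytes.Buffer
        cmd := exec.Command(name, args...)
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run()
        return stdout.String(), stderr.String(), err
    }

    func main() {
        out, errOut, err := runForDebug("docker", "network", "inspect", "old-k8s-version-554000")
        if err != nil {
            fmt.Printf("-- stdout --\n%s\n** stderr **\n%s\n", out, errOut)
        }
    }

Here a "network not found" error is expected and benign: it just tells minikube the network must be created.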
	I0613 12:39:23.921822   36348 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0613 12:39:23.973056   36348 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0613 12:39:23.973397   36348 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00104efb0}
	I0613 12:39:23.973413   36348 network_create.go:123] attempt to create docker network old-k8s-version-554000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0613 12:39:23.973518   36348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-554000 old-k8s-version-554000
	W0613 12:39:24.022974   36348 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-554000 old-k8s-version-554000 returned with exit code 1
	W0613 12:39:24.023016   36348 network_create.go:148] failed to create docker network old-k8s-version-554000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-554000 old-k8s-version-554000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0613 12:39:24.023036   36348 network_create.go:115] failed to create docker network old-k8s-version-554000 192.168.58.0/24, will retry: subnet is taken
	I0613 12:39:24.024473   36348 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0613 12:39:24.024792   36348 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008b69d0}
	I0613 12:39:24.024808   36348 network_create.go:123] attempt to create docker network old-k8s-version-554000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0613 12:39:24.024874   36348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-554000 old-k8s-version-554000
	I0613 12:39:24.108269   36348 network_create.go:107] docker network old-k8s-version-554000 192.168.67.0/24 created
	I0613 12:39:24.108305   36348 kic.go:117] calculated static IP "192.168.67.2" for the "old-k8s-version-554000" container
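
[Editor's note] The two attempts above show the free-subnet search: 192.168.58.0/24 collides with an existing pool ("Pool overlaps with other one on this address space"), so minikube steps to the next candidate, 192.168.67.0/24, which succeeds. A hedged sketch of that retry loop (the candidate list and step size are inferred from the 49 -> 58 -> 67 progression in this run, not taken from minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork walks candidate /24 subnets and creates a docker bridge
    // network on the first one that does not overlap an existing pool.
    func createNetwork(name string) (string, error) {
        for third := 49; third <= 103; third += 9 { // 192.168.49/58/67/... as in the log
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet is taken, try the next candidate
            }
            return "", fmt.Errorf("network create failed: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        subnet, err := createNetwork("example-net")
        fmt.Println(subnet, err)
    }

The node's static IP (192.168.67.2 above) is then simply the first client address of whichever subnet won.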
	I0613 12:39:24.108440   36348 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0613 12:39:24.158070   36348 cli_runner.go:164] Run: docker volume create old-k8s-version-554000 --label name.minikube.sigs.k8s.io=old-k8s-version-554000 --label created_by.minikube.sigs.k8s.io=true
	I0613 12:39:24.207400   36348 oci.go:103] Successfully created a docker volume old-k8s-version-554000
	I0613 12:39:24.207525   36348 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-554000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-554000 --entrypoint /usr/bin/test -v old-k8s-version-554000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0613 12:39:24.644585   36348 oci.go:107] Successfully prepared a docker volume old-k8s-version-554000
	I0613 12:39:24.644626   36348 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:39:24.644641   36348 kic.go:190] Starting extracting preloaded images to volume ...
	I0613 12:39:24.644756   36348 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-554000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0613 12:39:30.320468   36348 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-554000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (5.675755227s)
	I0613 12:39:30.320490   36348 kic.go:199] duration metric: took 5.675966 seconds to extract preloaded images to volume
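
[Editor's note] The volume-priming step above is worth calling out: rather than copying images into a running node, minikube mounts the lz4 preload tarball read-only into a one-shot container whose entrypoint is tar, extracting straight into the named volume that later becomes the node's /var. The same invocation reduced to its essentials, as a Go sketch (image tag and paths copied from the log; error handling elided):

    package main

    import "os/exec"

    func main() {
        preload := "/Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
        volume := "old-k8s-version-554000"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632"

        // One-shot container: entrypoint is tar, the preload tarball is
        // mounted read-only, and the named volume (the future node /var)
        // is the extraction target.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", preload+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
    }

Because the volume outlives the container, the ~5.7s extraction happens once and the node container started next simply mounts the pre-populated volume.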
	I0613 12:39:30.320620   36348 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0613 12:39:30.424685   36348 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-554000 --name old-k8s-version-554000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-554000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-554000 --network old-k8s-version-554000 --ip 192.168.67.2 --volume old-k8s-version-554000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0613 12:39:30.737732   36348 cli_runner.go:164] Run: docker container inspect old-k8s-version-554000 --format={{.State.Running}}
	I0613 12:39:30.804755   36348 cli_runner.go:164] Run: docker container inspect old-k8s-version-554000 --format={{.State.Status}}
	I0613 12:39:30.868500   36348 cli_runner.go:164] Run: docker exec old-k8s-version-554000 stat /var/lib/dpkg/alternatives/iptables
	I0613 12:39:30.985364   36348 oci.go:144] the created container "old-k8s-version-554000" has a running status.
	I0613 12:39:30.985442   36348 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa...
	I0613 12:39:31.026497   36348 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0613 12:39:31.096306   36348 cli_runner.go:164] Run: docker container inspect old-k8s-version-554000 --format={{.State.Status}}
	I0613 12:39:31.155965   36348 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0613 12:39:31.155997   36348 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-554000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0613 12:39:31.280450   36348 cli_runner.go:164] Run: docker container inspect old-k8s-version-554000 --format={{.State.Status}}
	I0613 12:39:31.346719   36348 machine.go:88] provisioning docker machine ...
	I0613 12:39:31.346765   36348 ubuntu.go:169] provisioning hostname "old-k8s-version-554000"
	I0613 12:39:31.346940   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:31.407111   36348 main.go:141] libmachine: Using SSH client type: native
	I0613 12:39:31.407514   36348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59436 <nil> <nil>}
	I0613 12:39:31.407529   36348 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-554000 && echo "old-k8s-version-554000" | sudo tee /etc/hostname
	I0613 12:39:31.548359   36348 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-554000
	
	I0613 12:39:31.548496   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:31.607671   36348 main.go:141] libmachine: Using SSH client type: native
	I0613 12:39:31.608101   36348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59436 <nil> <nil>}
	I0613 12:39:31.608117   36348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-554000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-554000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-554000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 12:39:31.732275   36348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 12:39:31.732307   36348 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 12:39:31.732330   36348 ubuntu.go:177] setting up certificates
	I0613 12:39:31.732347   36348 provision.go:83] configureAuth start
	I0613 12:39:31.732429   36348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-554000
	I0613 12:39:31.787037   36348 provision.go:138] copyHostCerts
	I0613 12:39:31.787140   36348 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 12:39:31.787150   36348 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 12:39:31.787251   36348 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 12:39:31.787451   36348 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 12:39:31.787458   36348 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 12:39:31.787520   36348 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 12:39:31.787695   36348 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 12:39:31.787702   36348 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 12:39:31.787761   36348 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 12:39:31.787913   36348 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-554000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-554000]
	I0613 12:39:31.893572   36348 provision.go:172] copyRemoteCerts
	I0613 12:39:31.893641   36348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 12:39:31.893702   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:31.988591   36348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59436 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:39:32.076565   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 12:39:32.100284   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0613 12:39:32.125905   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0613 12:39:32.152845   36348 provision.go:86] duration metric: configureAuth took 420.481679ms
	I0613 12:39:32.152863   36348 ubuntu.go:193] setting minikube options for container-runtime
	I0613 12:39:32.153016   36348 config.go:182] Loaded profile config "old-k8s-version-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0613 12:39:32.153103   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:32.206347   36348 main.go:141] libmachine: Using SSH client type: native
	I0613 12:39:32.206697   36348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59436 <nil> <nil>}
	I0613 12:39:32.206715   36348 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 12:39:32.328297   36348 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 12:39:32.328314   36348 ubuntu.go:71] root file system type: overlay
	I0613 12:39:32.328427   36348 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 12:39:32.328524   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:32.382048   36348 main.go:141] libmachine: Using SSH client type: native
	I0613 12:39:32.382388   36348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59436 <nil> <nil>}
	I0613 12:39:32.382436   36348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 12:39:32.510858   36348 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 12:39:32.510959   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:32.567366   36348 main.go:141] libmachine: Using SSH client type: native
	I0613 12:39:32.567722   36348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59436 <nil> <nil>}
	I0613 12:39:32.567736   36348 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 12:39:33.348977   36348 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-05-25 21:51:00.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-06-13 19:39:32.509025763 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0613 12:39:33.349002   36348 machine.go:91] provisioned docker machine in 2.002304747s
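
[Editor's note] The diff output above is the interesting part of this provisioning step: the new unit is written to docker.service.new, compared against the live file with `diff -u`, and only swapped in (followed by daemon-reload / enable / restart) when the two differ, so re-provisioning an already-configured machine becomes a no-op. The empty `ExecStart=` line in the new unit is the standard systemd idiom for clearing an inherited start command before setting a new one, as the unit's own comments explain. The same update-if-changed pattern in plain Go (illustrative; minikube performs it over SSH with the shell one-liner shown earlier):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // updateIfChanged writes newContents to path only when the current
    // contents differ, and reports whether a service reload is needed.
    func updateIfChanged(path string, newContents []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContents) {
            return false, nil // already up to date: skip the restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, newContents, 0o644); err != nil {
            return false, err
        }
        // rename is atomic on the same filesystem, so readers never
        // observe a half-written unit file
        return true, os.Rename(tmp, path)
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
        changed, err := updateIfChanged("/tmp/docker.service", unit)
        fmt.Println(changed, err)
    }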
	I0613 12:39:33.349008   36348 client.go:171] LocalClient.Create took 9.528567044s
	I0613 12:39:33.349027   36348 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-554000" took 9.528628144s
	I0613 12:39:33.349038   36348 start.go:300] post-start starting for "old-k8s-version-554000" (driver="docker")
	I0613 12:39:33.349051   36348 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 12:39:33.349118   36348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 12:39:33.349185   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:33.402651   36348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59436 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:39:33.492440   36348 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 12:39:33.496916   36348 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 12:39:33.496937   36348 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 12:39:33.496946   36348 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 12:39:33.496953   36348 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 12:39:33.496963   36348 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 12:39:33.497044   36348 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 12:39:33.497226   36348 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 12:39:33.497421   36348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 12:39:33.506280   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:39:33.530350   36348 start.go:303] post-start completed in 181.301782ms
	I0613 12:39:33.530958   36348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-554000
	I0613 12:39:33.586238   36348 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/config.json ...
	I0613 12:39:33.586705   36348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:39:33.586775   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:33.643178   36348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59436 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:39:33.731125   36348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 12:39:33.736592   36348 start.go:128] duration metric: createHost completed in 9.937682562s
	I0613 12:39:33.736610   36348 start.go:83] releasing machines lock for "old-k8s-version-554000", held for 9.937807347s
	I0613 12:39:33.736691   36348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-554000
	I0613 12:39:33.785941   36348 ssh_runner.go:195] Run: cat /version.json
	I0613 12:39:33.785962   36348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 12:39:33.786023   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:33.786040   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:33.839913   36348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59436 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:39:33.839972   36348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59436 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:39:34.027887   36348 ssh_runner.go:195] Run: systemctl --version
	I0613 12:39:34.033365   36348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0613 12:39:34.039021   36348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0613 12:39:34.062775   36348 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0613 12:39:34.062870   36348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0613 12:39:34.079354   36348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0613 12:39:34.095576   36348 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0613 12:39:34.095592   36348 start.go:464] detecting cgroup driver to use...
	I0613 12:39:34.095607   36348 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:39:34.095716   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:39:34.111561   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0613 12:39:34.121867   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 12:39:34.131973   36348 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 12:39:34.132037   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 12:39:34.142305   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:39:34.152495   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 12:39:34.162573   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:39:34.172501   36348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 12:39:34.182049   36348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
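
[Editor's note] The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and migrate any v1 runtime entries to io.containerd.runc.v2. For orientation, the SystemdCgroup toggle commonly lives at this path in a CRI-enabled containerd config (fragment is illustrative, not copied from this run):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false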
	I0613 12:39:34.192244   36348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 12:39:34.201176   36348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 12:39:34.210287   36348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:39:34.281743   36348 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0613 12:39:34.362720   36348 start.go:464] detecting cgroup driver to use...
	I0613 12:39:34.362746   36348 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:39:34.362829   36348 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 12:39:34.375443   36348 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 12:39:34.375511   36348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 12:39:34.387345   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:39:34.404215   36348 ssh_runner.go:195] Run: which cri-dockerd
	I0613 12:39:34.411367   36348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 12:39:34.420739   36348 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 12:39:34.439551   36348 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 12:39:34.544183   36348 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 12:39:34.627384   36348 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 12:39:34.627399   36348 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 12:39:34.645759   36348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:39:34.722639   36348 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:39:34.975612   36348 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:39:35.003641   36348 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:39:35.092339   36348 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.2 ...
	I0613 12:39:35.092523   36348 cli_runner.go:164] Run: docker exec -t old-k8s-version-554000 dig +short host.docker.internal
	I0613 12:39:35.204380   36348 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 12:39:35.204498   36348 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 12:39:35.209617   36348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
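
[Editor's note] The one-liner above is how minikube pins host.minikube.internal (and, later in this run, control-plane.minikube.internal) in the node's /etc/hosts: drop any line already ending in the name, append the fresh mapping, write to a temp file, and copy it back, so repeated starts never accumulate duplicate entries. The same filter-and-append in Go (a sketch; `pinHost` is an invented helper, and the IP is the one dug out of host.docker.internal above):

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites an /etc/hosts-style file so that exactly one line
    // maps ip to name, regardless of how many stale entries existed.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop stale entries for this name
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        _ = pinHost("/etc/hosts", "192.168.65.254", "host.minikube.internal")
    }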
	I0613 12:39:35.221174   36348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:39:35.270473   36348 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:39:35.270545   36348 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:39:35.293584   36348 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:39:35.293598   36348 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0613 12:39:35.293668   36348 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 12:39:35.303257   36348 ssh_runner.go:195] Run: which lz4
	I0613 12:39:35.307505   36348 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0613 12:39:35.311933   36348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0613 12:39:35.311966   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0613 12:39:40.184826   36348 docker.go:600] Took 4.877511 seconds to copy over tarball
	I0613 12:39:40.184917   36348 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0613 12:39:42.715015   36348 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.530135815s)
	I0613 12:39:42.715030   36348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0613 12:39:42.788660   36348 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 12:39:42.798228   36348 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0613 12:39:42.814784   36348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:39:42.897252   36348 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:39:43.600943   36348 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:39:43.623506   36348 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:39:43.623519   36348 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0613 12:39:43.623527   36348 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0613 12:39:43.629337   36348 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:39:43.631132   36348 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:39:43.631159   36348 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:39:43.631166   36348 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:39:43.631172   36348 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0613 12:39:43.631231   36348 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0613 12:39:43.631265   36348 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:39:43.631400   36348 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:39:43.634937   36348 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:39:43.637930   36348 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0613 12:39:43.639593   36348 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0613 12:39:43.639657   36348 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:39:43.639675   36348 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:39:43.639690   36348 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:39:43.639694   36348 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:39:43.639795   36348 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:39:44.857154   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:39:44.999755   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0613 12:39:45.023390   36348 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0613 12:39:45.023432   36348 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0613 12:39:45.023493   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0613 12:39:45.045916   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0613 12:39:45.230468   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0613 12:39:45.252778   36348 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0613 12:39:45.252803   36348 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0613 12:39:45.252860   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0613 12:39:45.276953   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0613 12:39:45.450635   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:39:45.473232   36348 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0613 12:39:45.473263   36348 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:39:45.473321   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:39:45.496014   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0613 12:39:45.511132   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:39:45.533826   36348 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0613 12:39:45.533851   36348 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:39:45.533907   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:39:45.556762   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0613 12:39:45.702153   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:39:45.725625   36348 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0613 12:39:45.725672   36348 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:39:45.725741   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:39:45.750069   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0613 12:39:46.008099   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:39:46.029723   36348 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0613 12:39:46.029754   36348 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:39:46.029822   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:39:46.049899   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0613 12:39:46.276060   36348 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0613 12:39:46.298166   36348 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0613 12:39:46.298197   36348 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:39:46.298263   36348 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0613 12:39:46.318376   36348 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0613 12:39:46.318425   36348 cache_images.go:92] LoadImages completed in 2.694948012s
	W0613 12:39:46.318476   36348 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
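
[Editor's note] The LoadImages block above follows one recipe per image: ask the daemon for the tag's ID with `docker image inspect --format {{.Id}}`, compare against the pinned hash, `docker rmi` the stale tag, then fall back to the on-disk cache, which in this run is missing, hence the "Unable to load cached images" warning (non-fatal: the images are pulled later instead). A sketch of the presence check (tag taken from the log; `imageID` is an invented helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID asks the docker daemon for the ID of a tag; an error means
    // the image is absent and must come from the cache or a registry.
    func imageID(tag string) (string, error) {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        id, err := imageID("registry.k8s.io/pause:3.1")
        if err != nil {
            fmt.Println("needs transfer:", err)
            return
        }
        fmt.Println("present at", id)
    }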
	I0613 12:39:46.318559   36348 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 12:39:46.375068   36348 cni.go:84] Creating CNI manager for ""
	I0613 12:39:46.375083   36348 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 12:39:46.375099   36348 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0613 12:39:46.375122   36348 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-554000 NodeName:old-k8s-version-554000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0613 12:39:46.375243   36348 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-554000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-554000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0613 12:39:46.375314   36348 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-554000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
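
[Editor's note] Everything in the kubeadm YAML and the kubelet unit above is rendered from the options struct logged just before them: each component's ExtraArgs map becomes an extraArgs stanza, and the Pairs map carries raw values such as certSANs. A toy text/template rendering of that mapping (the template and field names below are invented to show the shape of the transformation; this is not minikube's real template):

    package main

    import (
        "os"
        "text/template"
    )

    // Toy rendering of one component's ExtraArgs into a kubeadm
    // ClusterConfiguration fragment like the apiServer block above.
    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiServer:
      certSANs: {{ .CertSANs }}
      extraArgs:
    {{- range $k, $v := .APIServerArgs }}
        {{ $k }}: "{{ $v }}"
    {{- end }}
    `))

    func main() {
        data := struct {
            CertSANs      string
            APIServerArgs map[string]string
        }{
            CertSANs: `["127.0.0.1", "localhost", "192.168.67.2"]`,
            APIServerArgs: map[string]string{
                "enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount",
            },
        }
        kubeadmTmpl.Execute(os.Stdout, data)
    }

text/template iterates map keys in sorted order, so the generated YAML is stable across runs, which is what makes the later "write config only if changed" steps possible.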
	I0613 12:39:46.375381   36348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0613 12:39:46.384874   36348 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 12:39:46.384938   36348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 12:39:46.393808   36348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0613 12:39:46.410159   36348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 12:39:46.427060   36348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0613 12:39:46.444344   36348 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0613 12:39:46.449026   36348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
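	The bash one-liner above rewrites /etc/hosts via a temp file: it filters out any stale control-plane.minikube.internal entry, appends the fresh 192.168.67.2 mapping, and copies the result back over /etc/hosts. The outcome can be checked with (sketch):

	    grep control-plane.minikube.internal /etc/hosts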
	I0613 12:39:46.460378   36348 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000 for IP: 192.168.67.2
	I0613 12:39:46.460397   36348 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.460575   36348 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 12:39:46.460632   36348 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 12:39:46.460677   36348 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.key
	I0613 12:39:46.460691   36348 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.crt with IP's: []
	I0613 12:39:46.543033   36348 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.crt ...
	I0613 12:39:46.543044   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.crt: {Name:mk6bd7faf5f189738ebcf9933816b46aecb99d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.543348   36348 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.key ...
	I0613 12:39:46.543356   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.key: {Name:mke7f89b16386bca991e115386bea32c40a66c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.543555   36348 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key.c7fa3a9e
	I0613 12:39:46.543568   36348 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0613 12:39:46.688163   36348 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt.c7fa3a9e ...
	I0613 12:39:46.688183   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt.c7fa3a9e: {Name:mka99eace4874076692c1fb514adbbcdefb826a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.688511   36348 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key.c7fa3a9e ...
	I0613 12:39:46.688521   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key.c7fa3a9e: {Name:mk9356f816c0be912da7d6566ebe44b5eb9b5e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.688706   36348 certs.go:337] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt
	I0613 12:39:46.688869   36348 certs.go:341] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key
	I0613 12:39:46.689023   36348 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.key
	I0613 12:39:46.689036   36348 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.crt with IP's: []
	I0613 12:39:46.804974   36348 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.crt ...
	I0613 12:39:46.804986   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.crt: {Name:mk1dec45786b62344ec40209fb6e8abc40cfd2fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.805275   36348 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.key ...
	I0613 12:39:46.805297   36348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.key: {Name:mkf79aaad06d222399124c56c63315b6d13ee1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:39:46.805717   36348 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 12:39:46.805821   36348 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 12:39:46.805846   36348 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 12:39:46.805891   36348 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 12:39:46.805932   36348 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 12:39:46.805973   36348 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 12:39:46.806056   36348 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:39:46.806729   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 12:39:46.830731   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0613 12:39:46.853010   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 12:39:46.875084   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0613 12:39:46.897065   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 12:39:46.919385   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 12:39:46.941920   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 12:39:46.963991   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 12:39:46.986199   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 12:39:47.008241   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 12:39:47.031230   36348 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 12:39:47.053615   36348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 12:39:47.070322   36348 ssh_runner.go:195] Run: openssl version
	I0613 12:39:47.076839   36348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 12:39:47.086578   36348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:39:47.091002   36348 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:39:47.091044   36348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:39:47.098042   36348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 12:39:47.108168   36348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 12:39:47.117962   36348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 12:39:47.122504   36348 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 12:39:47.122587   36348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 12:39:47.129825   36348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
	I0613 12:39:47.139660   36348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 12:39:47.149254   36348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 12:39:47.154385   36348 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 12:39:47.154439   36348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 12:39:47.162305   36348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
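	The link names b5213941.0, 51391683.0 and 3ec20f2e.0 above are not arbitrary: OpenSSL looks CA certificates up in /etc/ssl/certs by subject hash, so each symlink is named after the hash of the corresponding PEM plus a ".0" suffix. Recreating one by hand, using the same openssl invocation the log runs (a sketch):

	    # The symlink name is the subject hash of the PEM plus ".0"
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"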
	I0613 12:39:47.173440   36348 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 12:39:47.178918   36348 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0613 12:39:47.178966   36348 kubeadm.go:404] StartCluster: {Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:39:47.179066   36348 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:39:47.204363   36348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 12:39:47.215794   36348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 12:39:47.226215   36348 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 12:39:47.226280   36348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:39:47.237663   36348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
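	None of the four kubeconfig files exist yet, so stale-config cleanup is skipped and the next step is a clean first kubeadm init. The same init can be reproduced interactively for debugging, a hypothetical sketch using the profile name from this run (add the same --ignore-preflight-errors list as below to get past the environment warnings):

	    minikube ssh -p old-k8s-version-554000
	    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --v=5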
	I0613 12:39:47.237705   36348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 12:39:47.295148   36348 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0613 12:39:47.295208   36348 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 12:39:47.595791   36348 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 12:39:47.595903   36348 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 12:39:47.595989   36348 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 12:39:47.815537   36348 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 12:39:47.816441   36348 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 12:39:47.824293   36348 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0613 12:39:47.896220   36348 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 12:39:47.917637   36348 out.go:204]   - Generating certificates and keys ...
	I0613 12:39:47.917715   36348 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 12:39:47.917793   36348 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 12:39:48.147497   36348 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0613 12:39:48.255669   36348 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0613 12:39:48.371435   36348 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0613 12:39:48.606324   36348 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0613 12:39:48.740373   36348 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0613 12:39:48.741068   36348 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-554000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0613 12:39:48.831498   36348 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0613 12:39:48.831646   36348 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-554000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0613 12:39:49.029205   36348 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0613 12:39:49.149463   36348 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0613 12:39:49.252660   36348 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0613 12:39:49.252868   36348 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 12:39:49.376258   36348 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 12:39:49.451412   36348 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 12:39:49.567056   36348 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 12:39:49.717127   36348 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 12:39:49.717677   36348 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 12:39:49.739160   36348 out.go:204]   - Booting up control plane ...
	I0613 12:39:49.739250   36348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 12:39:49.739337   36348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 12:39:49.739411   36348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 12:39:49.739511   36348 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 12:39:49.739640   36348 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 12:40:29.725276   36348 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 12:40:29.725789   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:40:29.725953   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:40:34.726680   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:40:34.726904   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:40:44.726982   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:40:44.727436   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:41:04.727890   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:41:04.728071   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:41:44.729163   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:41:44.729383   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:41:44.729402   36348 kubeadm.go:322] 
	I0613 12:41:44.729460   36348 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:41:44.729519   36348 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:41:44.729528   36348 kubeadm.go:322] 
	I0613 12:41:44.729574   36348 kubeadm.go:322] This error is likely caused by:
	I0613 12:41:44.729642   36348 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:41:44.729813   36348 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:41:44.729828   36348 kubeadm.go:322] 
	I0613 12:41:44.729954   36348 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:41:44.730002   36348 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:41:44.730046   36348 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:41:44.730054   36348 kubeadm.go:322] 
	I0613 12:41:44.730178   36348 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:41:44.730305   36348 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0613 12:41:44.730397   36348 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0613 12:41:44.730457   36348 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:41:44.730519   36348 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:41:44.730542   36348 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:41:44.733820   36348 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:41:44.733896   36348 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:41:44.733999   36348 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:41:44.734094   36348 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:41:44.734173   36348 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:41:44.734249   36348 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
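	The init failed because the v1.16 kubelet never came up to serve its health endpoint on port 10248; the preflight warnings above name the usual suspects (cgroup driver, swap, an unvalidated Docker 24.0.2). First-pass triage from inside the node, a sketch built from the commands kubeadm itself suggests:

	    # Which cgroup driver is Docker actually using?
	    docker info --format '{{.CgroupDriver}}'
	    # Why did the kubelet exit?
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    # Did any control-plane container start and then crash?
	    docker ps -a | grep kube | grep -v pause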
	W0613 12:41:44.734363   36348 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-554000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-554000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0613 12:41:44.734393   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
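	The kubeadm reset above tears the node back down before the retry, but (without a --cert-dir flag) it leaves minikube's certificate directory /var/lib/minikube/certs untouched, which is why the second attempt below logs "Using existing ..." for every certificate instead of regenerating them. A sketch to confirm:

	    sudo ls /var/lib/minikube/certs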
	I0613 12:41:45.153249   36348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:41:45.164615   36348 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 12:41:45.164676   36348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:41:45.173702   36348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0613 12:41:45.173727   36348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 12:41:45.225693   36348 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0613 12:41:45.225747   36348 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 12:41:45.476723   36348 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 12:41:45.476821   36348 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 12:41:45.476915   36348 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 12:41:45.661298   36348 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 12:41:45.662256   36348 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 12:41:45.668983   36348 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0613 12:41:45.735752   36348 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 12:41:45.777899   36348 out.go:204]   - Generating certificates and keys ...
	I0613 12:41:45.777990   36348 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 12:41:45.778062   36348 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 12:41:45.778154   36348 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0613 12:41:45.778227   36348 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0613 12:41:45.778298   36348 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0613 12:41:45.778344   36348 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0613 12:41:45.778439   36348 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0613 12:41:45.778496   36348 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0613 12:41:45.778562   36348 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0613 12:41:45.778624   36348 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0613 12:41:45.778665   36348 kubeadm.go:322] [certs] Using the existing "sa" key
	I0613 12:41:45.778719   36348 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 12:41:45.843851   36348 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 12:41:45.986609   36348 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 12:41:46.454797   36348 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 12:41:46.598554   36348 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 12:41:46.598922   36348 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 12:41:46.620280   36348 out.go:204]   - Booting up control plane ...
	I0613 12:41:46.620513   36348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 12:41:46.620688   36348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 12:41:46.620813   36348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 12:41:46.620987   36348 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 12:41:46.621316   36348 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 12:42:26.607770   36348 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 12:42:26.608860   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:42:26.609088   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:42:31.610925   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:42:31.611183   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:42:41.612271   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:42:41.612544   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:43:01.614843   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:43:01.615069   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:43:41.615800   36348 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:43:41.615993   36348 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:43:41.616019   36348 kubeadm.go:322] 
	I0613 12:43:41.616076   36348 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:43:41.616138   36348 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:43:41.616152   36348 kubeadm.go:322] 
	I0613 12:43:41.616188   36348 kubeadm.go:322] This error is likely caused by:
	I0613 12:43:41.616233   36348 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:43:41.616361   36348 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:43:41.616371   36348 kubeadm.go:322] 
	I0613 12:43:41.616520   36348 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:43:41.616561   36348 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:43:41.616596   36348 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:43:41.616605   36348 kubeadm.go:322] 
	I0613 12:43:41.616718   36348 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:43:41.616836   36348 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0613 12:43:41.616950   36348 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0613 12:43:41.617013   36348 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:43:41.617105   36348 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:43:41.617159   36348 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:43:41.620175   36348 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:43:41.620239   36348 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:43:41.620339   36348 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:43:41.620412   36348 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:43:41.620471   36348 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:43:41.620532   36348 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0613 12:43:41.620553   36348 kubeadm.go:406] StartCluster complete in 3m54.446543643s
	I0613 12:43:41.620661   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:43:41.642283   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.642299   36348 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:43:41.642372   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:43:41.665176   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.665191   36348 logs.go:286] No container was found matching "etcd"
	I0613 12:43:41.665268   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:43:41.685116   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.685130   36348 logs.go:286] No container was found matching "coredns"
	I0613 12:43:41.685206   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:43:41.714074   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.714088   36348 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:43:41.714157   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:43:41.737492   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.737516   36348 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:43:41.737637   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:43:41.758059   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.758073   36348 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:43:41.758140   36348 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:43:41.780095   36348 logs.go:284] 0 containers: []
	W0613 12:43:41.780108   36348 logs.go:286] No container was found matching "kindnet"
	I0613 12:43:41.780115   36348 logs.go:123] Gathering logs for kubelet ...
	I0613 12:43:41.780123   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:43:41.818743   36348 logs.go:123] Gathering logs for dmesg ...
	I0613 12:43:41.818756   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:43:41.833042   36348 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:43:41.833060   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:43:41.891708   36348 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
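	The connection-refused from kubectl is consistent with everything above: no kube-apiserver container ever started, so nothing is listening on the control-plane port 8443. A direct probe would show the same (sketch):

	    curl -sk https://localhost:8443/healthz || echo "apiserver not up"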
	I0613 12:43:41.891723   36348 logs.go:123] Gathering logs for Docker ...
	I0613 12:43:41.891731   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:43:41.908742   36348 logs.go:123] Gathering logs for container status ...
	I0613 12:43:41.908756   36348 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0613 12:43:41.961114   36348 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0613 12:43:41.961136   36348 out.go:239] * 
	W0613 12:43:41.961175   36348 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 12:43:41.961190   36348 out.go:239] * 
	W0613 12:43:41.961804   36348 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0613 12:43:42.024267   36348 out.go:177] 
	W0613 12:43:42.087625   36348 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 12:43:42.087694   36348 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0613 12:43:42.087723   36348 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0613 12:43:42.109571   36348 out.go:177] 

                                                
                                                
** /stderr **
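Editor's note: the repeated kubeadm failure above traces back to the preflight warnings — Docker is using the "cgroupfs" cgroup driver while kubeadm recommends "systemd", and the kubelet never becomes healthy on port 10248. A minimal triage sketch, assuming a systemd-based node image and a writable /etc/docker/daemon.json; the commands are standard Docker/minikube usage, and only the --extra-config flag is taken verbatim from the log's own suggestion:

	# Illustrative only, not part of the captured run:
	# check which cgroup driver Docker currently reports
	docker info --format '{{.CgroupDriver}}'

	# Option A: switch Docker itself to the systemd cgroup driver
	echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker

	# Option B (per the log's suggestion): make the kubelet match Docker instead
	minikube start -p old-k8s-version-554000 --extra-config=kubelet.cgroup-driver=systemd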
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 681849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:39:30.729047225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc5d1a801b8383a39d53ac00e429cf0a4ff856cf607fb13298c236e5594fd36",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59434"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59435"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccc5d1a801b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "2da931a3219cfcd17b0d7372b6b5662dc0f69e7d360759fd8c64b0cedd7cd9bc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 6 (357.786819ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0613 12:43:42.612632   37507 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-554000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (259.85s)
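Editor's note: the exit-status-6 post-mortem above is a stale-kubeconfig symptom — the profile "old-k8s-version-554000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig, so status cannot extract an endpoint IP. A minimal recovery sketch, assuming the profile's cluster still exists; the profile name and update-context command come from the log itself, and the kubectl steps are standard:

	# Rewrite the kubeconfig entry for this profile, then re-select it
	minikube -p old-k8s-version-554000 update-context
	kubectl config get-contexts
	kubectl config use-context old-k8s-version-554000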

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml: exit status 1 (35.525598ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-554000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml failed: exit status 1
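Editor's note: the deploy step fails before it starts — the kubectl context was never written because FirstStart aborted. A pre-flight check sketch a reader could run before retrying, assuming the same kubeconfig as above; the kubectl flags are standard and the busybox manifest is the test's own testdata:

	# Only attempt the create if the context actually exists
	kubectl config get-contexts -o name | grep -q old-k8s-version-554000 \
	  && kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml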
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 681849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:39:30.729047225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc5d1a801b8383a39d53ac00e429cf0a4ff856cf607fb13298c236e5594fd36",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59434"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59435"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccc5d1a801b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "2da931a3219cfcd17b0d7372b6b5662dc0f69e7d360759fd8c64b0cedd7cd9bc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 6 (352.654564ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0613 12:43:43.052501   37520 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-554000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 681849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:39:30.729047225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc5d1a801b8383a39d53ac00e429cf0a4ff856cf607fb13298c236e5594fd36",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59434"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59435"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccc5d1a801b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "2da931a3219cfcd17b0d7372b6b5662dc0f69e7d360759fd8c64b0cedd7cd9bc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
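The inspect dump above requests HostPort "0" for every container port; the ephemeral ports Docker actually assigned (59434-59438) show up under NetworkSettings.Ports. A minimal Go sketch, illustrative only and not part of the test suite, for extracting those mappings from `docker inspect` JSON:

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the slice of the docker inspect schema needed here is declared;
// the field names mirror the JSON keys in the dump above.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-554000").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		panic("unexpected inspect output")
	}
	// Print each container port next to the host address Docker bound it to.
	for port, binds := range containers[0].NetworkSettings.Ports {
		for _, b := range binds {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}
-- /go sketch --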
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
E0613 12:43:43.389011   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 6 (355.121713ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0613 12:43:43.458442   37534 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-554000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.85s)
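Exit status 6 above is minikube's status command failing to resolve the profile's endpoint because the profile no longer appears in the kubeconfig. A minimal sketch of that kind of lookup, assuming client-go's clientcmd package (the real check lives in minikube's status.go):

-- go sketch --
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Paths here are illustrative; the run above uses the Jenkins
	// integration kubeconfig and profile "old-k8s-version-554000".
	kubeconfig := os.Getenv("KUBECONFIG")
	profile := "old-k8s-version-554000"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	// "does not appear in <kubeconfig>" is reported when the profile
	// has no cluster entry, which is exactly the failure logged above.
	if _, ok := cfg.Clusters[profile]; !ok {
		fmt.Printf("%q does not appear in %s\n", profile, kubeconfig)
	}
}
-- /go sketch --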

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (71.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-554000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0613 12:43:43.546447   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:45.489334   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:43:48.310803   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.316733   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.326851   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.348997   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.389384   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.469520   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.629758   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:48.950055   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:49.591001   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:49.723574   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:43:50.871213   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:53.431335   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:43:58.551620   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:44:08.792644   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:44:15.511573   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:44:24.506046   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:44:27.326066   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.331546   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.343649   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.363892   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.404135   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.484605   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.644935   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:27.966078   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:28.607778   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:29.273201   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:44:29.889480   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:32.451668   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:37.571723   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:44:47.811822   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-554000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m11.000988442s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-554000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
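Every kubectl apply in the stderr block above dies with connection refused on 127.0.0.1:8443, meaning the apiserver inside the container never came up; the addon callback is only the messenger. An illustrative Go probe (not minikube code) that reproduces the same diagnosis:

-- go sketch --
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The addon callback applies manifests against https://localhost:8443;
	// a plain TCP dial is enough to distinguish "apiserver down"
	// (connection refused) from TLS or auth problems.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the errors above
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
-- /go sketch --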
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-554000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-554000 describe deploy/metrics-server -n kube-system: exit status 1 (36.008251ms)

** stderr ** 
	error: context "old-k8s-version-554000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-554000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
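The check at start_stop_delete_test.go:221 scans the deployment description for the rewritten image reference. A rough, hypothetical stand-in for that assertion, not the test's actual helper:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Re-creation of the image check for illustration only.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-554000",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		fmt.Println("describe failed:", err) // here: context does not exist
		return
	}
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		fmt.Printf("addon did not load correct image, want %q\n", want)
	}
}
-- /go sketch --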
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 681849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:39:30.729047225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc5d1a801b8383a39d53ac00e429cf0a4ff856cf607fb13298c236e5594fd36",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59434"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59435"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccc5d1a801b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "2da931a3219cfcd17b0d7372b6b5662dc0f69e7d360759fd8c64b0cedd7cd9bc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 6 (353.811303ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0613 12:44:54.899247   37593 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-554000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (71.44s)
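For reference, --format={{.Host}} is a Go text/template rendered over minikube's status struct; the struct below is an assumption inferred from the flag's usage in this report, not minikube's actual type:

-- go sketch --
package main

import (
	"os"
	"text/template"
)

// Status mirrors the kind of fields minikube exposes to --format
// templates; the field names here are assumptions for illustration.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// The same template string passed on the command line above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running"})
}
-- /go sketch --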

TestStartStop/group/old-k8s-version/serial/SecondStart (508.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0613 12:45:05.783875   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:45:07.407894   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:45:08.291526   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:45:10.232564   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:45:19.499716   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:45:33.465885   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:45:46.425304   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:45:49.251288   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:46:05.873684   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:46:25.304544   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 12:46:31.657100   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:46:32.150991   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:46:33.562012   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:46:38.815316   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:46:42.246405   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 12:46:46.276828   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
E0613 12:46:59.348346   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:47:11.169809   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:47:23.562921   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:47:51.246550   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:48:02.572135   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:48:15.648739   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:48:30.262527   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:48:48.304543   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:49:15.987707   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:49:27.319687   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:49:55.006635   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
E0613 12:50:05.777632   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:50:19.491452   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:51:06.042586   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m26.158624197s)

-- stdout --
	* [old-k8s-version-554000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-554000 in cluster old-k8s-version-554000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-554000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0613 12:44:56.789119   37621 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:44:56.789277   37621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:44:56.789282   37621 out.go:309] Setting ErrFile to fd 2...
	I0613 12:44:56.789286   37621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:44:56.789400   37621 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:44:56.790808   37621 out.go:303] Setting JSON to false
	I0613 12:44:56.810092   37621 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9867,"bootTime":1686675629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 12:44:56.810180   37621 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 12:44:56.832276   37621 out.go:177] * [old-k8s-version-554000] minikube v1.30.1 on Darwin 13.4
	I0613 12:44:56.876245   37621 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 12:44:56.876273   37621 notify.go:220] Checking for updates...
	I0613 12:44:56.920256   37621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:44:56.941401   37621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 12:44:56.963346   37621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 12:44:57.010762   37621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 12:44:57.054926   37621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 12:44:57.076318   37621 config.go:182] Loaded profile config "old-k8s-version-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0613 12:44:57.098731   37621 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0613 12:44:57.121867   37621 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 12:44:57.176358   37621 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 12:44:57.176480   37621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:44:57.272139   37621 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:44:57.262009476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:44:57.331695   37621 out.go:177] * Using the docker driver based on existing profile
	I0613 12:44:57.368499   37621 start.go:297] selected driver: docker
	I0613 12:44:57.368517   37621 start.go:884] validating driver "docker" against &{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:44:57.368640   37621 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 12:44:57.372625   37621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:44:57.465193   37621 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:44:57.4545639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:44:57.465410   37621 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0613 12:44:57.465432   37621 cni.go:84] Creating CNI manager for ""
	I0613 12:44:57.465443   37621 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 12:44:57.465459   37621 start_flags.go:319] config:
	{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:44:57.509185   37621 out.go:177] * Starting control plane node old-k8s-version-554000 in cluster old-k8s-version-554000
	I0613 12:44:57.530046   37621 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 12:44:57.550880   37621 out.go:177] * Pulling base image ...
	I0613 12:44:57.593159   37621 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:44:57.593159   37621 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 12:44:57.593268   37621 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0613 12:44:57.593290   37621 cache.go:57] Caching tarball of preloaded images
	I0613 12:44:57.593503   37621 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 12:44:57.593529   37621 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0613 12:44:57.594495   37621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/config.json ...
	I0613 12:44:57.643281   37621 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 12:44:57.643299   37621 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 12:44:57.643317   37621 cache.go:195] Successfully downloaded all kic artifacts
	I0613 12:44:57.643360   37621 start.go:365] acquiring machines lock for old-k8s-version-554000: {Name:mk0a9b1134645f4b38304ff0b8ed03f330d2f839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 12:44:57.643458   37621 start.go:369] acquired machines lock for "old-k8s-version-554000" in 79.593µs
	I0613 12:44:57.643490   37621 start.go:96] Skipping create...Using existing machine configuration
	I0613 12:44:57.643501   37621 fix.go:54] fixHost starting: 
	I0613 12:44:57.643752   37621 cli_runner.go:164] Run: docker container inspect old-k8s-version-554000 --format={{.State.Status}}
	I0613 12:44:57.693317   37621 fix.go:102] recreateIfNeeded on old-k8s-version-554000: state=Stopped err=<nil>
	W0613 12:44:57.693362   37621 fix.go:128] unexpected machine state, will restart: <nil>
	I0613 12:44:57.735659   37621 out.go:177] * Restarting existing docker container for "old-k8s-version-554000" ...
	I0613 12:44:57.756988   37621 cli_runner.go:164] Run: docker start old-k8s-version-554000
	I0613 12:44:58.008197   37621 cli_runner.go:164] Run: docker container inspect old-k8s-version-554000 --format={{.State.Status}}
	I0613 12:44:58.060693   37621 kic.go:426] container "old-k8s-version-554000" state is running.
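The restart path above is driven entirely by the Docker CLI: inspect the container's state, and start it when it is not running. A sketch of the same two calls:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // containerState runs the same inspect command the log shows and
    // returns Docker's state string ("running", "exited", ...).
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        name := "old-k8s-version-554000"
        state, err := containerState(name)
        if err != nil {
            log.Fatal(err)
        }
        if state != "running" {
            // Matches the "Restarting existing docker container" step above.
            if err := exec.Command("docker", "start", name).Run(); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("container state:", state)
    }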
	I0613 12:44:58.061298   37621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-554000
	I0613 12:44:58.115739   37621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/config.json ...
	I0613 12:44:58.116106   37621 machine.go:88] provisioning docker machine ...
	I0613 12:44:58.116129   37621 ubuntu.go:169] provisioning hostname "old-k8s-version-554000"
	I0613 12:44:58.116207   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:44:58.172514   37621 main.go:141] libmachine: Using SSH client type: native
	I0613 12:44:58.172908   37621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59652 <nil> <nil>}
	I0613 12:44:58.172920   37621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-554000 && echo "old-k8s-version-554000" | sudo tee /etc/hostname
	I0613 12:44:58.173923   37621 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0613 12:45:01.304546   37621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-554000
	
	I0613 12:45:01.304649   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:01.354216   37621 main.go:141] libmachine: Using SSH client type: native
	I0613 12:45:01.354557   37621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59652 <nil> <nil>}
	I0613 12:45:01.354571   37621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-554000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-554000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-554000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 12:45:01.472401   37621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
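Each "About to run SSH command" entry executes over an SSH session dialed to the container's forwarded SSH port (127.0.0.1:59652 here) as the docker user, which also explains the transient "Error dialing TCP" above while the container was still booting. A minimal runner sketch with golang.org/x/crypto/ssh, assuming the machine key from the log's id_rsa path sits next to the binary; host-key verification is disabled only because the endpoint is a local container:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Machine key; the log loads it from .minikube/machines/<name>/id_rsa.
        key, err := os.ReadFile("id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable only because the endpoint is a local container.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:59652", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("SSH cmd output: %s", out)
    }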
	I0613 12:45:01.472421   37621 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 12:45:01.472442   37621 ubuntu.go:177] setting up certificates
	I0613 12:45:01.472450   37621 provision.go:83] configureAuth start
	I0613 12:45:01.472520   37621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-554000
	I0613 12:45:01.521921   37621 provision.go:138] copyHostCerts
	I0613 12:45:01.522017   37621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 12:45:01.522027   37621 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 12:45:01.522158   37621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 12:45:01.522378   37621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 12:45:01.522384   37621 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 12:45:01.522452   37621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 12:45:01.522615   37621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 12:45:01.522623   37621 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 12:45:01.522731   37621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 12:45:01.522865   37621 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-554000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-554000]
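The "generating server cert" step issues a leaf certificate signed by the minikube CA with exactly the SAN list printed above. A compressed sketch of that issuance with crypto/x509, assuming the CA certificate and key are PEM files with an RSA key in PKCS#1 form (minikube's actual helper differs; file names here are illustrative):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    // mustPEM reads the first PEM block from path.
    func mustPEM(path string) *pem.Block {
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatalf("no PEM data in %s", path)
        }
        return block
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes PKCS#1 RSA
        if err != nil {
            log.Fatal(err)
        }
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-554000"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list from the provision.go:112 line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-554000"},
            IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }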
	I0613 12:45:01.630829   37621 provision.go:172] copyRemoteCerts
	I0613 12:45:01.630894   37621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 12:45:01.630945   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:01.683252   37621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59652 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:45:01.771131   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0613 12:45:01.793381   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 12:45:01.835258   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0613 12:45:01.857153   37621 provision.go:86] duration metric: configureAuth took 384.69772ms
	I0613 12:45:01.857166   37621 ubuntu.go:193] setting minikube options for container-runtime
	I0613 12:45:01.857308   37621 config.go:182] Loaded profile config "old-k8s-version-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0613 12:45:01.857372   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:01.908122   37621 main.go:141] libmachine: Using SSH client type: native
	I0613 12:45:01.908464   37621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59652 <nil> <nil>}
	I0613 12:45:01.908475   37621 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 12:45:02.027493   37621 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 12:45:02.027514   37621 ubuntu.go:71] root file system type: overlay
	I0613 12:45:02.027609   37621 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 12:45:02.027712   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:02.078509   37621 main.go:141] libmachine: Using SSH client type: native
	I0613 12:45:02.078861   37621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59652 <nil> <nil>}
	I0613 12:45:02.078914   37621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 12:45:02.207419   37621 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 12:45:02.207533   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:02.257968   37621 main.go:141] libmachine: Using SSH client type: native
	I0613 12:45:02.258352   37621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59652 <nil> <nil>}
	I0613 12:45:02.258368   37621 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 12:45:02.383620   37621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 12:45:02.383636   37621 machine.go:91] provisioned docker machine in 4.267611758s
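The unit update just completed is deliberately conservative: the candidate file is written to docker.service.new, and the mv/daemon-reload/restart branch only runs when diff finds a difference, so an unchanged unit never triggers a Docker restart. The same compare-then-swap pattern in Go (local paths as in the log; the systemctl steps are left as a comment since they need root):

    package main

    import (
        "bytes"
        "log"
        "os"
    )

    func main() {
        const cur = "/lib/systemd/system/docker.service"
        const next = cur + ".new"

        oldData, err := os.ReadFile(cur)
        if err != nil && !os.IsNotExist(err) {
            log.Fatal(err)
        }
        newData, err := os.ReadFile(next)
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(oldData, newData) {
            return // nothing changed; skip the restart entirely
        }
        if err := os.Rename(next, cur); err != nil {
            log.Fatal(err)
        }
        // followed by: systemctl daemon-reload && systemctl restart docker
    }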
	I0613 12:45:02.383646   37621 start.go:300] post-start starting for "old-k8s-version-554000" (driver="docker")
	I0613 12:45:02.383656   37621 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 12:45:02.383719   37621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 12:45:02.383776   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:02.433498   37621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59652 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:45:02.521914   37621 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 12:45:02.526245   37621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 12:45:02.526272   37621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 12:45:02.526280   37621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 12:45:02.526287   37621 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 12:45:02.526296   37621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 12:45:02.526383   37621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 12:45:02.526548   37621 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 12:45:02.526710   37621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 12:45:02.535546   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:45:02.558599   37621 start.go:303] post-start completed in 174.947669ms
	I0613 12:45:02.558702   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:45:02.558764   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:02.609152   37621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59652 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:45:02.695498   37621 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 12:45:02.701033   37621 fix.go:56] fixHost completed within 5.057638031s
	I0613 12:45:02.701050   37621 start.go:83] releasing machines lock for "old-k8s-version-554000", held for 5.057689973s
	I0613 12:45:02.701140   37621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-554000
	I0613 12:45:02.751058   37621 ssh_runner.go:195] Run: cat /version.json
	I0613 12:45:02.751073   37621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 12:45:02.751133   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:02.751166   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:02.803403   37621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59652 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:45:02.803701   37621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59652 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/old-k8s-version-554000/id_rsa Username:docker}
	I0613 12:45:02.988603   37621 ssh_runner.go:195] Run: systemctl --version
	I0613 12:45:02.994020   37621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0613 12:45:02.999295   37621 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0613 12:45:02.999352   37621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0613 12:45:03.008212   37621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0613 12:45:03.017311   37621 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0613 12:45:03.017326   37621 start.go:464] detecting cgroup driver to use...
	I0613 12:45:03.017342   37621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:45:03.017444   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:45:03.033499   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0613 12:45:03.043891   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 12:45:03.054782   37621 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 12:45:03.054850   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 12:45:03.065262   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:45:03.075361   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 12:45:03.085406   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:45:03.095948   37621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 12:45:03.105660   37621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0613 12:45:03.115812   37621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 12:45:03.124649   37621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 12:45:03.133567   37621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:45:03.211305   37621 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0613 12:45:03.296294   37621 start.go:464] detecting cgroup driver to use...
	I0613 12:45:03.296327   37621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:45:03.296400   37621 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 12:45:03.308561   37621 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 12:45:03.308642   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 12:45:03.321699   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:45:03.339189   37621 ssh_runner.go:195] Run: which cri-dockerd
	I0613 12:45:03.344081   37621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 12:45:03.354407   37621 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 12:45:03.371726   37621 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 12:45:03.475281   37621 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 12:45:03.564997   37621 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 12:45:03.565013   37621 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
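The 144-byte daemon.json pushed here is what points dockerd at the cgroupfs driver. A sketch of producing such a file with encoding/json; exec-opts is Docker's documented key for setting the cgroup driver, though the exact file minikube writes carries a few more settings:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        // Only the cgroup-driver setting is shown; illustrative, not the
        // complete daemon.json from the log.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }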
	I0613 12:45:03.582692   37621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:45:03.681382   37621 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:45:03.933347   37621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:45:03.961550   37621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:45:04.013520   37621 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.2 ...
	I0613 12:45:04.013645   37621 cli_runner.go:164] Run: docker exec -t old-k8s-version-554000 dig +short host.docker.internal
	I0613 12:45:04.122381   37621 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 12:45:04.122502   37621 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 12:45:04.127365   37621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
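The /etc/hosts refresh above is a filter-and-append: drop any stale host.minikube.internal line (the grep -v), append the new mapping, and copy the result back into place. The equivalent in Go, operating on the file directly for illustration:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const path = "/etc/hosts"
        const entry = "192.168.65.254\thost.minikube.internal"

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale host.minikube.internal line, like `grep -v` above.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }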
	I0613 12:45:04.139761   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:04.189804   37621 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 12:45:04.189917   37621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:45:04.212841   37621 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:45:04.212857   37621 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0613 12:45:04.212921   37621 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 12:45:04.222410   37621 ssh_runner.go:195] Run: which lz4
	I0613 12:45:04.227153   37621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0613 12:45:04.231429   37621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0613 12:45:04.231454   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0613 12:45:09.177888   37621 docker.go:600] Took 4.950919 seconds to copy over tarball
	I0613 12:45:09.177966   37621 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0613 12:45:11.573238   37621 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.395305916s)
	I0613 12:45:11.573253   37621 ssh_runner.go:146] rm: /preloaded.tar.lz4
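The 369 MB preload copied over is then unpacked in place with tar, using lz4 as the external decompressor exactly as shown (tar -I lz4 -C /var -xf /preloaded.tar.lz4). Driving the same command from Go and timing it, as the log does:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Same invocation as the ssh_runner line above; needs lz4 installed.
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Printf("extracted preload in %s", time.Since(start))
    }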
	I0613 12:45:11.640792   37621 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0613 12:45:11.650224   37621 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0613 12:45:11.666904   37621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:45:11.738877   37621 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 12:45:12.298513   37621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:45:12.320484   37621 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0613 12:45:12.320499   37621 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0613 12:45:12.320508   37621 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0613 12:45:12.326406   37621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:45:12.326490   37621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:45:12.326407   37621 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0613 12:45:12.326528   37621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:45:12.326718   37621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:45:12.327491   37621 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0613 12:45:12.327522   37621 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:45:12.327981   37621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:45:12.333013   37621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:45:12.334251   37621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:45:12.334598   37621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:45:12.334914   37621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:45:12.336093   37621 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0613 12:45:12.336254   37621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0613 12:45:12.336410   37621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:45:12.337364   37621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:45:13.462708   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:45:13.486406   37621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0613 12:45:13.486475   37621 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:45:13.486545   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0613 12:45:13.508592   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0613 12:45:13.764547   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 12:45:13.964562   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:45:13.987624   37621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0613 12:45:13.987655   37621 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:45:13.987713   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0613 12:45:14.010761   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0613 12:45:14.025456   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:45:14.051058   37621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0613 12:45:14.051090   37621 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:45:14.051146   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0613 12:45:14.074253   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0613 12:45:14.184659   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0613 12:45:14.208149   37621 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0613 12:45:14.208180   37621 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0613 12:45:14.208260   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0613 12:45:14.232021   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0613 12:45:14.448656   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0613 12:45:14.472652   37621 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0613 12:45:14.472682   37621 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0613 12:45:14.472745   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0613 12:45:14.497953   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0613 12:45:14.704831   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0613 12:45:14.728167   37621 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0613 12:45:14.728198   37621 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0613 12:45:14.728264   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0613 12:45:14.749145   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0613 12:45:15.031967   37621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:45:15.054662   37621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0613 12:45:15.054691   37621 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:45:15.054766   37621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0613 12:45:15.075277   37621 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0613 12:45:15.075336   37621 cache_images.go:92] LoadImages completed in 2.75487768s
	W0613 12:45:15.075397   37621 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
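Each "needs transfer" verdict above comes from comparing the image ID the runtime reports against the expected content hash; on mismatch the stale tag is removed and the image is reloaded from the on-disk cache, which is what fails here because the cache files are missing. A sketch of that check; the final load step is runtime-specific, so it is only indicated in a comment:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID returns the runtime's ID for ref, or "" if the image is absent.
    func imageID(ref string) string {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        ref := "registry.k8s.io/kube-controller-manager:v1.16.0"
        want := "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
        got := strings.TrimPrefix(imageID(ref), "sha256:")
        if got != want {
            fmt.Printf("%q needs transfer (have %q)\n", ref, got)
            exec.Command("docker", "rmi", ref).Run()
            // ...then load the cached image, e.g. `docker load -i <cache file>`.
        }
    }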
	I0613 12:45:15.075478   37621 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 12:45:15.127276   37621 cni.go:84] Creating CNI manager for ""
	I0613 12:45:15.127292   37621 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 12:45:15.127311   37621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0613 12:45:15.127327   37621 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-554000 NodeName:old-k8s-version-554000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0613 12:45:15.127445   37621 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-554000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-554000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0613 12:45:15.127521   37621 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-554000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
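The kubeadm YAML and the kubelet unit above are rendered from versioned Go templates, with the kubeadm.k8s.io/v1beta1 schema selected for v1.16.0. As a toy illustration of that rendering step with text/template (the template string and struct below are made up for the example, not minikube's own):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const kubeletFlags = "ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet " +
        "--hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --container-runtime=docker\n"

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletFlags))
        err := tmpl.Execute(os.Stdout, struct {
            Version, NodeName, NodeIP string
        }{"v1.16.0", "old-k8s-version-554000", "192.168.67.2"})
        if err != nil {
            log.Fatal(err)
        }
    }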
	I0613 12:45:15.127581   37621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0613 12:45:15.137059   37621 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 12:45:15.137123   37621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 12:45:15.145982   37621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0613 12:45:15.162941   37621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 12:45:15.179770   37621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0613 12:45:15.196643   37621 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0613 12:45:15.201191   37621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 12:45:15.212443   37621 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000 for IP: 192.168.67.2
	I0613 12:45:15.212461   37621 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:45:15.212629   37621 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 12:45:15.212715   37621 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 12:45:15.212811   37621 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/client.key
	I0613 12:45:15.212894   37621 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key.c7fa3a9e
	I0613 12:45:15.212987   37621 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.key
	I0613 12:45:15.213225   37621 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 12:45:15.213271   37621 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 12:45:15.213283   37621 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 12:45:15.213321   37621 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 12:45:15.213359   37621 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 12:45:15.213396   37621 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 12:45:15.213477   37621 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:45:15.214089   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 12:45:15.237120   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0613 12:45:15.260290   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 12:45:15.283691   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/old-k8s-version-554000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0613 12:45:15.306306   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 12:45:15.328736   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 12:45:15.351102   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 12:45:15.373164   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 12:45:15.394959   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 12:45:15.417460   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 12:45:15.439894   37621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 12:45:15.463095   37621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 12:45:15.480265   37621 ssh_runner.go:195] Run: openssl version
	I0613 12:45:15.486575   37621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 12:45:15.496697   37621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 12:45:15.501276   37621 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 12:45:15.501323   37621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 12:45:15.508515   37621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
	I0613 12:45:15.518214   37621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 12:45:15.528349   37621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 12:45:15.532698   37621 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 12:45:15.532740   37621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 12:45:15.539933   37621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
	I0613 12:45:15.550395   37621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 12:45:15.560448   37621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:45:15.564865   37621 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:45:15.564915   37621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:45:15.572214   37621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 12:45:15.581620   37621 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 12:45:15.586068   37621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0613 12:45:15.593746   37621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0613 12:45:15.600935   37621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0613 12:45:15.608184   37621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0613 12:45:15.615581   37621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0613 12:45:15.622765   37621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
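openssl x509 -checkend 86400 exits zero only if the certificate is still valid 24 hours from now, which is how each of the checks above decides whether a cert needs regeneration. The same test in Go with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // any of the certs checked above
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least 24h")
    }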
	I0613 12:45:15.630210   37621 kubeadm.go:404] StartCluster: {Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:45:15.630329   37621 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:45:15.651953   37621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 12:45:15.661350   37621 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0613 12:45:15.661370   37621 kubeadm.go:636] restartCluster start
	I0613 12:45:15.661425   37621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0613 12:45:15.670325   37621 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:45:15.670405   37621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-554000
	I0613 12:45:15.721283   37621 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-554000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:45:15.721445   37621 kubeconfig.go:146] "old-k8s-version-554000" context is missing from /Users/jenkins/minikube-integration/15003-20351/kubeconfig - will repair!
	I0613 12:45:15.721790   37621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:45:15.723362   37621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0613 12:45:15.733321   37621 api_server.go:166] Checking apiserver status ...
	I0613 12:45:15.733392   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:45:15.743720   37621 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
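From this point the runner settles into a fixed-cadence retry: the pgrep probe fails while no kube-apiserver process exists yet, and the check repeats roughly every 500ms until the process appears or the wait times out, which is what the run of near-identical attempts below is. A condensed sketch of the pattern (the 4-minute ceiling is an assumption for the example, not a value from the log):

    package main

    import (
        "context"
        "log"
        "os/exec"
        "time"
    )

    // apiserverUp runs the same probe as the log: look for a kube-apiserver
    // process belonging to this minikube cluster.
    func apiserverUp() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if apiserverUp() {
                log.Print("apiserver is running")
                return
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for apiserver")
            case <-ticker.C:
            }
        }
    }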
	[... the same "Checking apiserver status" / pgrep attempt repeats every ~500ms from 12:45:16.243 to 12:45:25.244 (19 further attempts), each ending in the identical warning "stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1" with empty stdout/stderr ...]
	I0613 12:45:25.733360   37621 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
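[Editor's note] The collapsed block above is a fixed-interval poll that gives up when its context deadline expires, which is exactly the "context deadline exceeded" reported on the previous line. A minimal sketch of that pattern, assuming a 10-second timeout (the actual deadline is set elsewhere in minikube and is not shown in this log):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer probes pgrep roughly every 500ms until the process
    // appears or the context deadline expires, mirroring the loop above.
    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // pgrep exits 0 only when a matching process is found.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // surfaces as "context deadline exceeded"
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServer(ctx))
    }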
	I0613 12:45:25.733391   37621 kubeadm.go:1128] stopping kube-system containers ...
	I0613 12:45:25.733534   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:45:25.756117   37621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0613 12:45:25.768857   37621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:45:25.778033   37621 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jun 13 19:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jun 13 19:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Jun 13 19:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Jun 13 19:41 /etc/kubernetes/scheduler.conf
	
	I0613 12:45:25.778095   37621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0613 12:45:25.787173   37621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0613 12:45:25.796356   37621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0613 12:45:25.805483   37621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
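[Editor's note] The four grep runs above check whether each existing /etc/kubernetes/*.conf file already points at the expected control-plane endpoint. A simple equivalent of that check (a sketch, not minikube's code; it assumes the files are readable by the current user, whereas the log uses sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // hasEndpoint reports whether a config file mentions the expected
    // API-server endpoint, like the `sudo grep https://... <file>` runs above.
    func hasEndpoint(path, endpoint string) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        return strings.Contains(string(b), endpoint), nil
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            ok, err := hasEndpoint("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
            fmt.Println(f, ok, err)
        }
    }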
	I0613 12:45:25.814446   37621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 12:45:25.823457   37621 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0613 12:45:25.823472   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:45:25.879812   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:45:26.417684   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:45:26.606773   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 12:45:26.670792   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
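[Editor's note] The five Run lines above replay the kubeadm init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, instead of a full `kubeadm init`. A sketch that shells out in the same order (the `env PATH=...` prefix from the log is omitted for brevity; error handling is simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("sudo", args...).CombinedOutput()
            fmt.Printf("phase %v: err=%v\n%s", p, err, out)
            if err != nil {
                break // later phases depend on earlier ones
            }
        }
    }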
	I0613 12:45:26.762993   37621 api_server.go:52] waiting for apiserver process to appear ...
	I0613 12:45:26.763070   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~500ms from 12:45:27.273 to 12:46:26.274 (119 further attempts) without ever finding a kube-apiserver process ...]
	I0613 12:46:26.772478   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:26.794504   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.808713   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:26.808787   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:26.830320   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.830333   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:26.830415   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:26.851210   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.851221   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:26.851292   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:26.870490   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.870502   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:26.870577   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:26.890519   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.890532   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:26.890603   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:26.912051   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.912064   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:26.912135   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:26.933294   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.933311   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:26.933379   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:26.954012   37621 logs.go:284] 0 containers: []
	W0613 12:46:26.954025   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
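[Editor's note] Each "0 containers" result above comes from a `docker ps -a` name filter, one per control-plane component; the `k8s_` prefix is the naming convention the kubelet's Docker integration uses for pod containers. A minimal equivalent of one such check (a sketch using the docker CLI via os/exec):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs of all (including stopped) containers whose
    // name matches the k8s_<component> prefix, like the filters in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            fmt.Println(c, len(ids), "containers:", ids, err)
        }
    }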
	I0613 12:46:26.954035   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:26.954046   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:26.994526   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:26.994542   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:27.008737   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:27.008751   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:27.066217   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:27.066235   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:27.066242   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:27.082204   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:27.082218   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
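[Editor's note] The gathering pass above shells out through `/bin/bash -c` to journalctl, dmesg, kubectl, and a crictl-or-docker fallback (the backtick expression falls back to `docker ps -a` when crictl is absent or fails). A compact sketch of running those collectors; the command strings are copied from the Run lines above, while the collector loop itself is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, c := range cmds {
            // bash -c is required so the pipes, backticks, and || fallbacks work.
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
        }
    }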
	I0613 12:46:29.635208   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:29.646412   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:29.670062   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.670077   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:29.670157   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:29.692779   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.692793   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:29.692871   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:29.713879   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.713894   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:29.713969   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:29.735765   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.735779   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:29.735852   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:29.756877   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.756890   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:29.756968   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:29.779616   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.779637   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:29.779723   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:29.800960   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.800973   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:29.801058   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:29.822119   37621 logs.go:284] 0 containers: []
	W0613 12:46:29.822131   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:29.822139   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:29.822147   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:29.880838   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:29.880850   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:29.880857   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:29.896988   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:29.897000   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:29.951244   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:29.951258   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:29.990794   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:29.990810   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:32.506485   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:32.517929   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:32.538762   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.538775   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:32.538841   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:32.559455   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.559468   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:32.559538   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:32.579485   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.579497   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:32.579573   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:32.600269   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.600282   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:32.600356   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:32.620698   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.620713   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:32.620796   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:32.641446   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.641460   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:32.641530   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:32.662798   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.662812   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:32.662895   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:32.684908   37621 logs.go:284] 0 containers: []
	W0613 12:46:32.684922   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:32.684930   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:32.684938   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:32.734007   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:32.734029   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:32.748630   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:32.748648   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:32.808837   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:32.808850   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:32.808857   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:32.824561   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:32.824575   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:35.379725   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:35.390917   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:35.410849   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.410860   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:35.410936   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:35.431482   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.431496   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:35.431570   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:35.452654   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.452670   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:35.452744   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:35.473938   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.473952   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:35.474030   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:35.495634   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.495648   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:35.495723   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:35.516540   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.516554   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:35.516637   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:35.536681   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.536695   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:35.536773   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:35.557740   37621 logs.go:284] 0 containers: []
	W0613 12:46:35.557759   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:35.557767   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:35.557776   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:35.638268   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:35.638288   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:35.689517   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:35.689543   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:35.706261   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:35.706285   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:35.806460   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:35.806488   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:35.806499   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:38.342500   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:38.408689   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:38.442325   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.442349   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:38.442482   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:38.463442   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.463456   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:38.463542   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:38.484257   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.484270   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:38.484341   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:38.513034   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.513054   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:38.513145   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:38.540867   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.540898   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:38.541031   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:38.565418   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.565432   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:38.565511   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:38.584950   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.584963   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:38.585037   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:38.605076   37621 logs.go:284] 0 containers: []
	W0613 12:46:38.605099   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:38.605116   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:38.605128   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:38.656092   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:38.656114   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:38.672337   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:38.672354   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:38.758314   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:38.758337   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:38.758347   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:38.776717   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:38.776733   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:41.343097   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:41.354752   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:41.377136   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.377163   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:41.377238   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:41.404904   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.404921   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:41.405005   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:41.427617   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.427630   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:41.427701   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:41.448757   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.448772   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:41.448847   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:41.471987   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.472000   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:41.472077   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:41.497087   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.497101   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:41.497174   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:41.518577   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.518590   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:41.518669   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:41.541793   37621 logs.go:284] 0 containers: []
	W0613 12:46:41.541809   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:41.541817   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:41.541826   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:41.559390   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:41.559404   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:41.628532   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:41.628545   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:41.628556   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:41.650009   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:41.650026   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:41.711538   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:41.711554   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:44.258040   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:44.269390   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:44.289307   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.289320   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:44.289391   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:44.310314   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.310327   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:44.310407   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:44.332733   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.332748   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:44.332839   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:44.355330   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.355344   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:44.355421   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:44.377706   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.377726   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:44.377815   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:44.399087   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.399101   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:44.399174   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:44.420573   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.420589   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:44.420670   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:44.446166   37621 logs.go:284] 0 containers: []
	W0613 12:46:44.446180   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:44.446188   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:44.446197   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:44.461568   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:44.461585   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:44.522300   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:44.522321   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:44.522329   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:44.539473   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:44.539487   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:44.608340   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:44.608356   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:47.154127   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:47.165485   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:47.187824   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.187838   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:47.187905   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:47.207998   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.208015   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:47.208101   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:47.230812   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.230829   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:47.230922   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:47.252305   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.252320   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:47.252394   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:47.280767   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.280781   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:47.280923   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:47.301950   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.301964   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:47.302040   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:47.322529   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.322542   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:47.322616   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:47.344674   37621 logs.go:284] 0 containers: []
	W0613 12:46:47.344696   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:47.344708   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:47.344719   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:47.361756   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:47.361772   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:47.418357   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:47.418376   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:47.464007   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:47.464028   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:47.480865   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:47.480881   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:47.544482   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:50.046522   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:50.058555   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:50.079235   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.079252   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:50.079322   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:50.100974   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.100988   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:50.101057   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:50.123794   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.123808   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:50.123898   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:50.145189   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.145207   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:50.145290   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:50.167352   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.167365   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:50.167444   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:50.188975   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.188988   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:50.189062   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:50.209132   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.209147   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:50.209224   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:50.230510   37621 logs.go:284] 0 containers: []
	W0613 12:46:50.230525   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:50.230535   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:50.230543   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:50.277919   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:50.277935   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:50.293122   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:50.293136   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:50.359039   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:50.359060   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:50.359071   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:50.376215   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:50.376230   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:52.932683   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:52.944997   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:52.967405   37621 logs.go:284] 0 containers: []
	W0613 12:46:52.967425   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:52.967544   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:52.989890   37621 logs.go:284] 0 containers: []
	W0613 12:46:52.989903   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:52.990001   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:53.011003   37621 logs.go:284] 0 containers: []
	W0613 12:46:53.011017   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:53.011101   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:53.033006   37621 logs.go:284] 0 containers: []
	W0613 12:46:53.033019   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:53.033092   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:53.055604   37621 logs.go:284] 0 containers: []
	W0613 12:46:53.055619   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:53.055699   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:53.077019   37621 logs.go:284] 0 containers: []
	W0613 12:46:53.077035   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:53.077111   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:53.099127   37621 logs.go:284] 0 containers: []
	W0613 12:46:53.099142   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:53.099226   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:53.123804   37621 logs.go:284] 0 containers: []
	W0613 12:46:53.123831   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:53.123842   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:53.123850   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:53.173228   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:53.173247   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:53.188531   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:53.188546   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:53.251345   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:53.251367   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:53.251376   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:53.269295   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:53.269311   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
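Note: each pass above follows the same shape: probe for a kube-apiserver process, list candidate control-plane containers by name, then collect kubelet, dmesg, node, Docker, and container-status output before retrying. A minimal bash sketch of one pass, assuming shell access to the minikube node; the loop is illustrative glue, not minikube's actual source:

    # one diagnostic pass, mirroring the commands logged above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || true
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -u cri-docker -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a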
	I0613 12:46:55.829546   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:55.841284   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:55.861578   37621 logs.go:284] 0 containers: []
	W0613 12:46:55.861594   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:55.861671   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:55.881848   37621 logs.go:284] 0 containers: []
	W0613 12:46:55.881861   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:55.881930   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:55.902092   37621 logs.go:284] 0 containers: []
	W0613 12:46:55.902108   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:55.902192   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:55.922740   37621 logs.go:284] 0 containers: []
	W0613 12:46:55.922756   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:55.922848   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:55.945151   37621 logs.go:284] 0 containers: []
	W0613 12:46:55.945168   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:55.945267   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:55.973533   37621 logs.go:284] 0 containers: []
	W0613 12:46:55.973560   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:55.973658   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:56.023861   37621 logs.go:284] 0 containers: []
	W0613 12:46:56.023881   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:56.023973   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:56.060958   37621 logs.go:284] 0 containers: []
	W0613 12:46:56.060986   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:56.060999   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:56.061013   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:56.115956   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:56.115984   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:56.134456   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:56.134482   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:56.204141   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:56.204157   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:56.204173   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:56.223321   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:56.223337   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:46:58.783519   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:46:58.796405   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:46:58.816932   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.816945   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:46:58.817014   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:46:58.838224   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.838237   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:46:58.838306   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:46:58.858369   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.858382   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:46:58.858474   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:46:58.880818   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.880830   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:46:58.880903   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:46:58.901508   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.901521   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:46:58.901595   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:46:58.923005   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.923019   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:46:58.923095   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:46:58.943637   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.943650   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:46:58.943738   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:46:58.965355   37621 logs.go:284] 0 containers: []
	W0613 12:46:58.965368   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:46:58.965376   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:46:58.965384   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:46:59.005983   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:46:59.005999   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:46:59.020265   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:46:59.020279   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:46:59.080339   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:46:59.080351   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:46:59.080359   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:46:59.096774   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:46:59.096788   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
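Note: the recurring "connection to the server localhost:8443 was refused" means kubectl reached the node but nothing is listening on the apiserver port, which is consistent with the empty k8s_kube-apiserver listings. A quick check one could run from inside the node; the responses shown in the comment are what would be expected, not output taken from this run:

    # expect "Connection refused" while the apiserver is down,
    # and an HTTP response from /healthz once it is serving
    curl -ksS https://localhost:8443/healthz || true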
	I0613 12:47:01.653800   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:01.665663   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:01.687667   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.687680   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:01.687754   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:01.708646   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.708660   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:01.708736   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:01.728587   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.728599   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:01.728678   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:01.750332   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.750347   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:01.750433   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:01.771707   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.771720   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:01.771790   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:01.792784   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.808268   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:01.808363   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:01.830387   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.830402   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:01.830475   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:01.851083   37621 logs.go:284] 0 containers: []
	W0613 12:47:01.851103   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:01.851112   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:01.851123   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:01.865145   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:01.865159   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:01.923689   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:01.923707   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:01.923714   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:01.939694   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:01.939707   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:01.993565   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:01.993580   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
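Note: the name=k8s_<component> filters rely on the naming scheme the kubelet's dockershim uses for pod containers, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>. Zero matches for every filter therefore means the kubelet never created any control-plane containers at all. An anchored manual variant of the same check, for illustration:

    # prefix-anchored variant of the filter; prints ID, name, and status
    docker ps -a --filter 'name=^k8s_kube-apiserver' \
      --format '{{.ID}} {{.Names}} {{.Status}}'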
	I0613 12:47:04.537406   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:04.549828   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:04.570028   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.570041   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:04.570110   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:04.591678   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.591691   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:04.591760   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:04.612314   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.612328   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:04.612393   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:04.634989   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.635002   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:04.635080   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:04.656346   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.656359   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:04.656436   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:04.678509   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.678523   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:04.678604   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:04.699313   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.699327   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:04.699405   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:04.722992   37621 logs.go:284] 0 containers: []
	W0613 12:47:04.723004   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:04.723012   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:04.723019   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:04.762945   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:04.762960   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:04.777878   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:04.777893   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:04.835204   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:04.835220   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:04.835227   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:04.851008   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:04.851022   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:07.406753   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:07.418146   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:07.438815   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.438831   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:07.438906   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:07.460690   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.460704   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:07.460775   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:07.480451   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.480465   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:07.480537   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:07.518677   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.518690   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:07.518766   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:07.540315   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.540328   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:07.540409   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:07.561000   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.561012   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:07.561087   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:07.581680   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.581693   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:07.581764   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:07.602653   37621 logs.go:284] 0 containers: []
	W0613 12:47:07.602666   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:07.602674   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:07.602682   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:07.662126   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:07.662139   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:07.662145   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:07.678141   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:07.678155   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:07.732889   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:07.732905   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:07.771138   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:07.771154   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
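Note: the dmesg invocation combines short options: -H (human-readable output), -P (no pager), and -L=never (no color), then restricts to warning-and-worse log levels and keeps the last 400 lines. The long-option equivalent, spelled out:

    # long-option form of the dmesg command gathered above
    sudo dmesg --human --nopager --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400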
	I0613 12:47:10.287846   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:10.300486   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:10.321457   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.321470   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:10.321544   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:10.343767   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.343781   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:10.343851   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:10.365042   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.365055   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:10.365125   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:10.385838   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.385851   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:10.385923   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:10.406732   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.406748   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:10.406819   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:10.427987   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.428000   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:10.428074   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:10.449694   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.449708   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:10.449779   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:10.470623   37621 logs.go:284] 0 containers: []
	W0613 12:47:10.470640   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:10.470649   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:10.470659   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:10.539871   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:10.539886   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:10.553687   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:10.553704   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:10.610532   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:10.610545   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:10.610554   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:10.626224   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:10.626241   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
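Note: the container-status command uses a small shell fallback: when which finds no crictl, the command substitution yields the bare word crictl, that invocation fails, and the || branch falls through to docker ps -a on Docker-only nodes. The same pattern in isolation, using command -v as a POSIX-flavored variant of which:

    # "preferred tool, else fallback" idiom from the log above
    tool="$(command -v crictl || echo crictl)"   # may be just the bare word
    sudo "$tool" ps -a || sudo docker ps -a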
	I0613 12:47:13.183927   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:13.196712   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:13.217039   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.217052   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:13.217136   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:13.237345   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.237359   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:13.237430   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:13.258496   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.258509   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:13.258575   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:13.279254   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.279267   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:13.279362   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:13.299997   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.300010   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:13.300079   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:13.321080   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.321102   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:13.321182   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:13.341756   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.341768   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:13.341836   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:13.362727   37621 logs.go:284] 0 containers: []
	W0613 12:47:13.362741   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:13.362750   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:13.362758   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:13.376571   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:13.376587   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:13.436624   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:13.436646   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:13.436654   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:13.454124   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:13.454140   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:13.537913   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:13.537928   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
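Note: the recurring pgrep probe performs an exact full-command-line match. With -f the pattern is tested against the whole command line, -x requires the pattern to match that line in its entirety (hence the leading and trailing .*), and -n keeps only the newest match. Annotated in isolation:

    # -f: match against the full command line
    # -x: pattern must match the whole line exactly
    # -n: print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'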
	I0613 12:47:16.076707   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:16.089555   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:16.111004   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.111021   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:16.111106   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:16.131892   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.131906   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:16.131984   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:16.152443   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.152456   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:16.152533   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:16.173634   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.173646   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:16.173726   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:16.193773   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.193787   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:16.193857   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:16.214788   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.214802   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:16.214869   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:16.235722   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.235737   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:16.235810   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:16.258062   37621 logs.go:284] 0 containers: []
	W0613 12:47:16.258082   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:16.258090   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:16.258099   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:16.298010   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:16.298027   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:16.313011   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:16.313028   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:16.373536   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:16.373550   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:16.373557   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:16.389539   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:16.389555   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
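Note: the timestamps show one pass roughly every three seconds, the shape of a poll-until-healthy loop with a deadline. A generic sketch of such a loop; the three-second interval and the timeout value are assumptions for illustration, not minikube's configuration:

    # poll for an apiserver process until a deadline passes
    deadline=$((SECONDS + 360))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out' >&2; exit 1; }
      sleep 3
    done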
	I0613 12:47:18.947123   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:18.958303   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:18.979500   37621 logs.go:284] 0 containers: []
	W0613 12:47:18.979513   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:18.979582   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:19.000573   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.000587   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:19.000658   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:19.022542   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.022556   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:19.022624   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:19.043287   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.043309   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:19.043380   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:19.064143   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.064158   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:19.064229   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:19.085578   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.085592   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:19.085661   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:19.107083   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.107097   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:19.107165   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:19.127460   37621 logs.go:284] 0 containers: []
	W0613 12:47:19.127474   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:19.127482   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:19.127490   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:19.168440   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:19.168455   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:19.183591   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:19.183619   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:19.243966   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:19.243980   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:19.243987   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:19.260186   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:19.260200   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
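Note: journalctl accepts -u more than once, so the Docker gathering step reads the docker and cri-docker units together, with -n 400 limiting the combined output to the most recent 400 entries. A script-friendly variant; --no-pager is an addition here, convenient when not attached to a terminal:

    # last 400 journal entries across the docker and cri-docker units
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager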
	I0613 12:47:21.816470   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:21.829060   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:21.849564   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.849576   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:21.849646   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:21.869898   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.869912   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:21.869999   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:21.890451   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.890464   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:21.890534   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:21.911661   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.911673   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:21.911741   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:21.933279   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.933291   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:21.933354   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:21.953754   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.953766   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:21.953836   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:21.974887   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.974900   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:21.974978   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:21.996272   37621 logs.go:284] 0 containers: []
	W0613 12:47:21.996284   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:21.996299   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:21.996307   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:22.049736   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:22.049751   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:22.089510   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:22.089523   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:22.104505   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:22.104519   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:22.166126   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:22.166139   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:22.166146   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:24.683156   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:24.694643   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:24.716221   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.716234   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:24.716313   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:24.738620   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.738633   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:24.738710   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:24.760567   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.760580   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:24.760649   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:24.782016   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.782030   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:24.782098   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:24.802014   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.802027   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:24.802099   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:24.823554   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.823567   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:24.823642   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:24.845537   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.845550   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:24.845621   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:24.866788   37621 logs.go:284] 0 containers: []
	W0613 12:47:24.866800   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:24.866807   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:24.866815   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:24.909716   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:24.909733   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:24.924204   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:24.924218   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:24.983736   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:24.983750   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:24.983757   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:24.999889   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:24.999903   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:27.556704   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:27.569431   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:27.590127   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.590140   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:27.590209   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:27.611425   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.611438   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:27.611508   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:27.632453   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.632466   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:27.632537   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:27.653554   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.653568   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:27.653649   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:27.675348   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.675366   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:27.675448   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:27.710700   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.710722   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:27.710851   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:27.732746   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.732760   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:27.732839   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:27.753944   37621 logs.go:284] 0 containers: []
	W0613 12:47:27.753957   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:27.753964   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:27.753973   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:27.793310   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:27.793324   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:27.807387   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:27.807401   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:27.864919   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:27.864934   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:27.864944   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:27.881020   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:27.881035   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:30.439330   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:30.452155   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:30.472669   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.472682   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:30.472766   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:30.493748   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.493763   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:30.493833   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:30.514416   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.514428   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:30.514495   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:30.535582   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.535595   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:30.535668   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:30.557009   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.557022   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:30.557092   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:30.577087   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.577099   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:30.577167   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:30.597884   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.597897   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:30.597967   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:30.618534   37621 logs.go:284] 0 containers: []
	W0613 12:47:30.618548   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:30.618555   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:30.618564   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:30.678380   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:30.678392   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:30.678400   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:30.705283   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:30.705299   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:30.762872   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:30.762887   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:30.801782   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:30.801796   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:33.316582   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:33.329377   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:33.349466   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.349480   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:33.349551   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:33.370759   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.370772   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:33.370840   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:33.390861   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.390873   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:33.390946   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:33.412436   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.412449   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:33.412522   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:33.433308   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.433320   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:33.433400   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:33.454454   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.454467   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:33.454536   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:33.474737   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.474750   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:33.474817   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:33.495915   37621 logs.go:284] 0 containers: []
	W0613 12:47:33.495929   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:33.495936   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:33.495944   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:33.536629   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:33.536645   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:33.550913   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:33.550927   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:33.608495   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:47:33.608507   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:33.608514   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:33.625103   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:33.625124   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:36.184073   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:36.196910   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:36.218017   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.218030   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:36.218107   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:36.239545   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.239560   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:36.239640   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:36.261691   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.261704   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:36.261773   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:36.283078   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.283090   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:36.283185   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:36.305130   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.305143   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:36.305212   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:36.326412   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.326425   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:36.326518   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:36.348058   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.348071   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:36.348139   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:36.368928   37621 logs.go:284] 0 containers: []
	W0613 12:47:36.368940   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:36.368948   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:36.368957   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:36.427472   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:36.427493   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:36.427502   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:36.443329   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:36.443343   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:36.498599   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:36.498614   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:36.539524   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:36.539541   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:39.054509   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:39.066590   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:39.087250   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.087262   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:39.087338   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:39.107950   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.107963   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:39.108059   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:39.129402   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.129415   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:39.129488   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:39.150604   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.150618   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:39.150688   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:39.171676   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.171689   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:39.171758   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:39.192344   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.192357   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:39.192438   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:39.214195   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.214208   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:39.214275   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:39.235248   37621 logs.go:284] 0 containers: []
	W0613 12:47:39.235261   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:39.235269   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:39.235278   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:39.276140   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:39.276162   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:39.290557   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:39.290574   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:39.349873   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:39.349891   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:39.349912   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:39.366376   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:39.366389   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:41.920606   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:41.932126   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:41.953278   37621 logs.go:284] 0 containers: []
	W0613 12:47:41.953292   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:41.953369   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:41.975499   37621 logs.go:284] 0 containers: []
	W0613 12:47:41.975518   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:41.975596   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:42.014504   37621 logs.go:284] 0 containers: []
	W0613 12:47:42.014518   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:42.014594   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:42.035466   37621 logs.go:284] 0 containers: []
	W0613 12:47:42.035479   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:42.035573   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:42.056277   37621 logs.go:284] 0 containers: []
	W0613 12:47:42.056290   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:42.056358   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:42.076104   37621 logs.go:284] 0 containers: []
	W0613 12:47:42.076118   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:42.076189   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:42.098071   37621 logs.go:284] 0 containers: []
	W0613 12:47:42.098084   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:42.098157   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:42.118902   37621 logs.go:284] 0 containers: []
	W0613 12:47:42.118914   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:42.118921   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:42.118929   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:42.159835   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:42.159851   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:42.174263   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:42.174280   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:42.232505   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:42.232519   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:42.232526   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:42.248506   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:42.248518   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:44.803748   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:44.816392   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:44.836810   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.836823   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:44.836891   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:44.858294   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.858308   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:44.858378   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:44.879915   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.879927   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:44.879997   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:44.901782   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.901798   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:44.901881   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:44.923008   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.923023   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:44.923094   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:44.944961   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.944973   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:44.945046   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:44.965957   37621 logs.go:284] 0 containers: []
	W0613 12:47:44.965971   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:44.966043   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:45.017814   37621 logs.go:284] 0 containers: []
	W0613 12:47:45.017829   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:45.017837   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:45.017845   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:45.060342   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:45.060360   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:45.074781   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:45.074796   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:45.133177   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:45.133191   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:45.133198   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:45.149651   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:45.149666   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:47.706770   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:47.719776   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:47.739991   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.740005   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:47.740072   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:47.760978   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.760992   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:47.761061   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:47.781485   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.781498   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:47.781567   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:47.802792   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.802804   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:47.802874   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:47.823161   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.823179   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:47.823248   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:47.844012   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.844025   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:47.844094   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:47.866062   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.866074   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:47.866144   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:47.887656   37621 logs.go:284] 0 containers: []
	W0613 12:47:47.887669   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:47.887677   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:47.887685   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:47.903803   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:47.903817   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:47.961404   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:47.961425   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:48.029640   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:48.029655   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:48.044167   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:48.044182   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:48.102922   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:50.603492   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:50.616195   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:50.636575   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.636591   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:50.636672   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:50.657287   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.657301   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:50.657380   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:50.677922   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.677936   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:50.678007   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:50.698743   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.698760   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:50.698836   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:50.720110   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.720124   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:50.720197   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:50.741592   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.741605   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:50.741681   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:50.761829   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.761842   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:50.761911   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:50.783759   37621 logs.go:284] 0 containers: []
	W0613 12:47:50.783772   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:50.783780   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:50.783788   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:50.797634   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:50.797652   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:50.856243   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:50.856256   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:50.856263   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:50.872371   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:50.872384   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:50.927496   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:50.927513   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:53.470059   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:53.482800   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:53.503017   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.503032   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:53.503107   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:53.524153   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.524168   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:53.524241   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:53.544762   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.544774   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:53.544840   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:53.565125   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.565140   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:53.565208   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:53.586910   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.586924   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:53.586998   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:53.607086   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.607099   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:53.607172   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:53.628077   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.628090   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:53.628158   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:53.649879   37621 logs.go:284] 0 containers: []
	W0613 12:47:53.649892   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:53.649899   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:53.649908   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:53.664064   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:53.664079   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:53.723339   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:53.723356   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:53.723363   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:53.739009   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:53.739022   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:53.794364   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:53.794381   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:56.336296   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:56.349130   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:56.369628   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.369641   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:56.369712   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:56.391109   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.391122   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:56.391197   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:56.412576   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.412590   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:56.412659   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:56.433240   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.433254   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:56.433323   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:56.453283   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.453295   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:56.453365   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:56.474178   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.474199   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:56.474278   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:56.494357   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.494371   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:56.494443   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:56.515137   37621 logs.go:284] 0 containers: []
	W0613 12:47:56.515150   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:56.515158   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:56.515166   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:47:56.570413   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:56.570429   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:56.610208   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:56.610222   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:56.624501   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:56.624515   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:56.680959   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:56.680973   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:56.680981   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:59.197050   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:47:59.208269   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:47:59.229513   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.229530   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:47:59.229609   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:47:59.251324   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.251336   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:47:59.251400   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:47:59.272418   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.272435   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:47:59.272500   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:47:59.294304   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.294316   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:47:59.294385   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:47:59.314581   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.314595   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:47:59.314666   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:47:59.335491   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.335505   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:47:59.335574   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:47:59.356833   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.356846   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:47:59.356912   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:47:59.378146   37621 logs.go:284] 0 containers: []
	W0613 12:47:59.378162   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:47:59.378174   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:47:59.378196   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:47:59.420068   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:47:59.420082   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:47:59.434171   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:47:59.434190   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:47:59.492765   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:47:59.492777   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:47:59.492784   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:47:59.509020   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:47:59.509034   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:02.065310   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:02.077931   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:02.099112   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.099125   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:02.099195   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:02.119978   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.119991   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:02.120060   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:02.141688   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.141703   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:02.141780   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:02.162163   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.162176   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:02.162244   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:02.182539   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.182556   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:02.182627   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:02.213041   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.213054   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:02.213128   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:02.235367   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.235381   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:02.235453   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:02.256836   37621 logs.go:284] 0 containers: []
	W0613 12:48:02.256850   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:02.256857   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:02.256865   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:02.297300   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:02.297315   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:02.311584   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:02.311599   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:02.370484   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:48:02.370499   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:02.370506   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:02.388981   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:02.389003   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:04.950576   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:04.963424   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:04.982970   37621 logs.go:284] 0 containers: []
	W0613 12:48:04.982984   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:04.983057   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:05.003946   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.003960   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:05.004031   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:05.025601   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.025615   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:05.025684   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:05.046072   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.046084   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:05.046151   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:05.068304   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.068320   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:05.068387   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:05.088473   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.088486   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:05.088553   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:05.109787   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.109800   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:05.109874   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:05.131630   37621 logs.go:284] 0 containers: []
	W0613 12:48:05.131643   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:05.131650   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:05.131658   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:05.148217   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:05.148231   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:05.214853   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:05.214869   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:05.255909   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:05.255927   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:05.270380   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:05.270399   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:05.329168   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:48:07.831022   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:07.844049   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:07.865122   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.865135   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:07.865204   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:07.885641   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.885654   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:07.885722   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:07.905781   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.905795   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:07.905864   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:07.927677   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.927691   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:07.927759   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:07.948223   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.948236   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:07.948299   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:07.969125   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.969139   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:07.969209   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:07.991664   37621 logs.go:284] 0 containers: []
	W0613 12:48:07.991677   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:07.991743   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:08.014439   37621 logs.go:284] 0 containers: []
	W0613 12:48:08.014453   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:08.014460   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:08.014467   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:08.054372   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:08.054385   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:08.069286   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:08.069301   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:08.127325   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:48:08.127337   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:08.127345   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:08.144681   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:08.144698   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:10.711432   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:10.724477   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:10.744709   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.744723   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:10.744788   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:10.765587   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.765601   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:10.765674   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:10.786426   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.786441   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:10.786506   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:10.807520   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.807533   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:10.807603   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:10.827695   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.827708   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:10.827776   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:10.848704   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.848717   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:10.848785   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:10.870247   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.870262   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:10.870329   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:10.890690   37621 logs.go:284] 0 containers: []
	W0613 12:48:10.890704   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:10.890711   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:10.890719   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:10.931471   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:10.931487   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:10.946180   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:10.946192   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:11.004187   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:48:11.004199   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:11.004206   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:11.019952   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:11.019964   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:13.575520   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:13.587817   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:13.607395   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.607407   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:13.607477   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:13.627887   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.627899   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:13.627968   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:13.650676   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.650689   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:13.650761   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:13.670921   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.670934   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:13.671003   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:13.691306   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.691319   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:13.691390   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:13.712738   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.712752   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:13.712822   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:13.733382   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.733395   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:13.733465   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:13.754959   37621 logs.go:284] 0 containers: []
	W0613 12:48:13.754974   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:13.754981   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:13.754989   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:13.770763   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:13.770777   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:13.825095   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:13.825109   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:13.865184   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:13.865197   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:13.879685   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:13.879700   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:13.938129   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0613 12:48:16.438203   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:16.449423   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:16.470361   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.470377   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:16.470450   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:16.490713   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.490727   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:16.490808   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:16.512365   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.512379   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:16.512450   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:16.533934   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.533948   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:16.534014   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:16.554446   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.554460   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:16.554529   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:16.575693   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.575706   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:16.575775   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:16.595883   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.595897   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:16.595965   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:16.616155   37621 logs.go:284] 0 containers: []
	W0613 12:48:16.616170   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:16.616177   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:16.616184   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:16.632220   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:16.632233   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:16.687865   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:16.687882   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:16.727308   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:16.727325   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:16.741525   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:16.741540   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:16.799605   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
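
	The block above is one iteration of minikube's apiserver health-check loop: it looks for a kube-apiserver process, checks each expected control-plane container by name, and, when nothing is found, gathers kubelet/dmesg/Docker logs before retrying roughly every 2.5 seconds. A minimal sketch of that poll-until-healthy pattern follows; the interval, deadline, and probe are assumptions read off the timestamps, and the probe runs locally here, whereas minikube runs it inside the node over SSH.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // apiserverRunning reports whether a kube-apiserver process exists,
	    // mirroring the "pgrep -xnf kube-apiserver.*minikube.*" probe above.
	    func apiserverRunning() bool {
	    	// pgrep exits non-zero when no process matches.
	    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	    }

	    // pollUntil retries probe every interval until it succeeds or the deadline passes.
	    func pollUntil(interval, timeout time.Duration, probe func() bool) error {
	    	deadline := time.Now().Add(timeout)
	    	for {
	    		if probe() {
	    			return nil
	    		}
	    		if time.Now().After(deadline) {
	    			return errors.New("timed out waiting for kube-apiserver")
	    		}
	    		time.Sleep(interval)
	    	}
	    }

	    func main() {
	    	// 2.5s spacing and a multi-minute budget are assumptions, not minikube's actual values.
	    	if err := pollUntil(2500*time.Millisecond, 6*time.Minute, apiserverRunning); err != nil {
	    		fmt.Println(err)
	    	}
	    }
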
	I0613 12:48:19.308748   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:19.321286   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:19.342855   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.342869   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:19.342939   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:19.363228   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.363241   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:19.363314   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:19.384584   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.384598   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:19.384668   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:19.407733   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.407771   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:19.407847   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:19.431143   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.431158   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:19.431231   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:19.452380   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.452397   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:19.452484   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:19.472822   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.472835   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:19.472906   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:19.517282   37621 logs.go:284] 0 containers: []
	W0613 12:48:19.517296   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:19.517303   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:19.517311   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:19.533712   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:19.533727   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:19.589388   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:19.589404   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:19.629265   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:19.629280   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:19.643803   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:19.643818   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:19.701652   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
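
	Each "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" call above lists the IDs of containers whose names carry the kubelet-style "k8s_" prefix; an empty result is what produces the paired "0 containers" / "No container was found" lines. A sketch of that detection step, assuming a local docker CLI (the component list and warning text are illustrative):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containerIDs returns the IDs of containers whose name matches the
	    // kubelet-style "k8s_<component>" prefix, like the filters in the log above.
	    func containerIDs(component string) ([]string, error) {
	    	out, err := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_"+component,
	    		"--format", "{{.ID}}").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	// One ID per line; Fields also drops the trailing newline.
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	    		ids, err := containerIDs(c)
	    		if err != nil {
	    			fmt.Println("docker ps failed:", err)
	    			continue
	    		}
	    		fmt.Printf("%d containers: %v\n", len(ids), ids)
	    		if len(ids) == 0 {
	    			fmt.Printf("No container was found matching %q\n", c)
	    		}
	    	}
	    }
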
	I0613 12:48:22.202022   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:22.213848   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:22.234779   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.234792   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:22.234860   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:22.255036   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.255050   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:22.255120   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:22.275678   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.275696   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:22.275768   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:22.296692   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.296707   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:22.296776   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:22.318399   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.318419   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:22.318507   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:22.340034   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.340050   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:22.340124   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:22.360515   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.360528   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:22.360595   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:22.380560   37621 logs.go:284] 0 containers: []
	W0613 12:48:22.380573   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:22.380581   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:22.380588   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:22.396698   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:22.396714   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:22.454783   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:22.454801   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:22.526951   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:22.526971   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:22.541418   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:22.541432   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:22.598870   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:25.099882   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:25.111685   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:25.131996   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.132008   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:25.132080   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:25.153567   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.153580   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:25.153664   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:25.174534   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.174546   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:25.174619   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:25.196381   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.196394   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:25.196465   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:25.217102   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.217115   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:25.217184   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:25.239089   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.239102   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:25.239173   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:25.260009   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.260023   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:25.260098   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:25.281143   37621 logs.go:284] 0 containers: []
	W0613 12:48:25.281158   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:25.281165   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:25.281172   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:25.319730   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:25.319744   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:25.334289   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:25.334303   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:25.393159   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:25.393172   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:25.393179   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:25.410783   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:25.410798   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
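
	The container-status step shells out through bash so it can prefer crictl and fall back to plain docker: the inner `which crictl || echo crictl` keeps the command string valid whether or not crictl is installed, and the outer "|| sudo docker ps -a" covers the case where crictl exists but fails. A sketch of invoking the same fallback chain, assuming the command string from the log runs as-is on the target host:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Same fallback chain as the log: try crictl, fall back to docker.
	    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    	if err != nil {
	    		fmt.Println("container status collection failed:", err)
	    	}
	    	fmt.Print(string(out))
	    }
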
	I0613 12:48:27.969948   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:27.982154   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:28.002418   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.002431   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:28.002503   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:28.023947   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.023960   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:28.024034   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:28.045214   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.045227   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:28.045296   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:28.066197   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.066211   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:28.066282   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:28.086512   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.086527   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:28.086615   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:28.107638   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.107651   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:28.107724   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:28.128218   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.128233   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:28.128300   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:28.149025   37621 logs.go:284] 0 containers: []
	W0613 12:48:28.149038   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:28.149046   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:28.149053   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:28.207108   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:28.207127   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:28.207134   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:28.223160   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:28.223174   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:28.277846   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:28.277860   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:28.317592   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:28.317608   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:30.833142   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:30.845855   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:30.866680   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.866693   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:30.866767   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:30.887205   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.887218   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:30.887283   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:30.907770   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.907784   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:30.907854   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:30.927887   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.927901   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:30.927972   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:30.947564   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.947578   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:30.947648   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:30.967909   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.967923   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:30.967994   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:30.989141   37621 logs.go:284] 0 containers: []
	W0613 12:48:30.989154   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:30.989224   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:31.009542   37621 logs.go:284] 0 containers: []
	W0613 12:48:31.009555   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:31.009562   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:31.009570   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:31.049986   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:31.050019   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:31.065000   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:31.065015   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:31.123984   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:31.123997   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:31.124004   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:31.140400   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:31.140414   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:33.694855   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:33.706169   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:33.726343   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.726358   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:33.726453   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:33.747688   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.747701   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:33.747771   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:33.769359   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.769372   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:33.769451   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:33.790146   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.790160   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:33.790229   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:33.811143   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.811157   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:33.811230   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:33.832530   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.832543   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:33.832626   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:33.852544   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.852557   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:33.852623   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:33.874219   37621 logs.go:284] 0 containers: []
	W0613 12:48:33.874231   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:33.874239   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:33.874246   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:33.890146   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:33.890160   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:33.944339   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:33.944353   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:33.983704   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:33.983719   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:33.998106   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:33.998120   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:34.055293   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
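
	The repeated "connection to the server localhost:8443 was refused" lines mean kubectl resolved its endpoint from /var/lib/minikube/kubeconfig but nothing was listening on port 8443, which is consistent with the apiserver container never starting. A minimal probe that distinguishes "refused" from "listening" (the address and timeout are assumptions):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// A refused TCP connect means no process is bound to the port,
	    	// matching kubectl's error in the log above.
	    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	    	if err != nil {
	    		fmt.Println("apiserver not reachable:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("something is listening on localhost:8443")
	    }
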
	I0613 12:48:36.555773   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:36.570250   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:36.592185   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.592200   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:36.592271   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:36.612978   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.612992   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:36.613062   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:36.633698   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.633720   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:36.633825   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:36.655653   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.655667   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:36.655737   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:36.677120   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.677135   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:36.677204   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:36.704982   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.704997   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:36.705066   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:36.726979   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.726994   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:36.727063   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:36.748771   37621 logs.go:284] 0 containers: []
	W0613 12:48:36.748785   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:36.748794   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:36.748801   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:36.789967   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:36.806228   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:36.820565   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:36.820581   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:36.880092   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:36.880107   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:36.880115   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:36.896252   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:36.896264   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:39.449641   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:39.460801   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:39.480773   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.480787   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:39.480857   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:39.501937   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.501951   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:39.502010   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:39.523632   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.523644   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:39.523716   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:39.545355   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.545369   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:39.545436   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:39.566405   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.566418   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:39.566482   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:39.586935   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.586947   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:39.587017   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:39.607633   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.607647   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:39.607723   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:39.628748   37621 logs.go:284] 0 containers: []
	W0613 12:48:39.628762   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:39.628770   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:39.628780   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:39.642996   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:39.643012   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:39.711898   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:39.711914   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:39.711923   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:39.728710   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:39.728725   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:39.784107   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:39.784123   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:42.323206   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:42.334212   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:42.353718   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.353731   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:42.353801   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:42.374379   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.374393   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:42.374461   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:42.395198   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.395213   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:42.395302   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:42.416210   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.416229   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:42.416306   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:42.436332   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.436346   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:42.436416   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:42.457638   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.457652   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:42.457723   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:42.478438   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.478451   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:42.478522   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:42.498886   37621 logs.go:284] 0 containers: []
	W0613 12:48:42.498899   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:42.498908   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:42.498916   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:42.561445   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:42.561458   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:42.561464   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:42.577293   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:42.577305   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:42.632269   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:42.632286   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:42.674077   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:42.674096   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
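
	The gathering steps are deliberately bounded: journalctl is limited to the last 400 lines per unit ("-n 400") and dmesg is filtered to warning level and above before being tailed, so each retry adds a fixed amount of diagnostics rather than the whole journal. A sketch of that bounded capture; the unit names and line budget are copied from the log, while the output handling is illustrative:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // capture runs a shell pipeline and returns its combined output,
	    // roughly what minikube's ssh_runner does inside the node.
	    // Errors are ignored here for brevity; a real collector would log them.
	    func capture(pipeline string) string {
	    	out, _ := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
	    	return string(out)
	    }

	    func main() {
	    	// Bounded log capture, mirroring the commands in the log above.
	    	kubelet := capture("sudo journalctl -u kubelet -n 400")
	    	kernel := capture("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	    	fmt.Printf("kubelet: %d bytes, dmesg: %d bytes\n", len(kubelet), len(kernel))
	    }
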
	I0613 12:48:45.191893   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:45.204891   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:45.226245   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.226258   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:45.226325   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:45.246942   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.246956   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:45.247026   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:45.268208   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.268223   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:45.268297   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:45.289589   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.289603   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:45.289669   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:45.310336   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.310349   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:45.310419   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:45.330633   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.330647   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:45.330716   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:45.351072   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.351086   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:45.351155   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:45.372270   37621 logs.go:284] 0 containers: []
	W0613 12:48:45.372283   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:45.372291   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:45.372302   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:45.388361   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:45.388374   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:45.443931   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:45.443946   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:45.484027   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:45.484042   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:45.498179   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:45.498194   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:45.557443   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:48.058186   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:48.070834   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:48.091176   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.091191   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:48.091273   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:48.112603   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.112616   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:48.112685   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:48.133244   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.133260   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:48.133335   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:48.155486   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.155500   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:48.155568   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:48.175461   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.175474   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:48.175547   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:48.195692   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.195707   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:48.195777   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:48.216442   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.216455   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:48.216524   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:48.237516   37621 logs.go:284] 0 containers: []
	W0613 12:48:48.237531   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:48.237539   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:48.237546   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:48.292619   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:48.292636   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:48.333568   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:48.333584   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:48.347768   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:48.347784   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:48.406376   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:48.406391   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:48.406400   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:50.922509   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:50.934287   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:50.955383   37621 logs.go:284] 0 containers: []
	W0613 12:48:50.955397   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:50.955467   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:50.976788   37621 logs.go:284] 0 containers: []
	W0613 12:48:50.976802   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:50.976879   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:51.015222   37621 logs.go:284] 0 containers: []
	W0613 12:48:51.015236   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:51.015306   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:51.036167   37621 logs.go:284] 0 containers: []
	W0613 12:48:51.036181   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:51.036253   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:51.057539   37621 logs.go:284] 0 containers: []
	W0613 12:48:51.057553   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:51.057625   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:51.077961   37621 logs.go:284] 0 containers: []
	W0613 12:48:51.077975   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:51.078045   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:51.098848   37621 logs.go:284] 0 containers: []
	W0613 12:48:51.098863   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:51.098930   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:51.120322   37621 logs.go:284] 0 containers: []
	W0613 12:48:51.120336   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:51.120344   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:51.120351   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:51.160691   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:51.160707   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:51.174991   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:51.175006   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:51.233099   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:51.233112   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:51.233119   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:51.248968   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:51.248980   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:53.806391   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:53.819473   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:53.839796   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.839808   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:53.839875   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:53.861213   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.861227   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:53.861295   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:53.883379   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.883392   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:53.883459   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:53.903819   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.903834   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:53.903902   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:53.926531   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.926553   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:53.926638   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:53.948018   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.948034   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:53.948119   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:53.968817   37621 logs.go:284] 0 containers: []
	W0613 12:48:53.968832   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:53.968901   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:54.017439   37621 logs.go:284] 0 containers: []
	W0613 12:48:54.017453   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:54.017461   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:54.017468   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:54.031225   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:54.031239   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:54.088840   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:54.088853   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:54.088860   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:54.105049   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:54.105063   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:54.158580   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:54.158595   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:56.698131   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:56.710907   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:56.731135   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.731147   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:56.731214   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:56.752482   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.752494   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:56.752565   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:56.773423   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.773436   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:56.773506   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:56.793994   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.805506   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:56.805591   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:56.826816   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.826829   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:56.826897   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:56.848277   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.848290   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:56.848359   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:56.869972   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.869985   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:56.870054   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:56.890314   37621 logs.go:284] 0 containers: []
	W0613 12:48:56.890328   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:56.890336   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:56.890344   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:56.933251   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:56.933269   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:56.948344   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:56.948362   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:57.016650   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:57.016662   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:57.016668   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:57.032852   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:57.032883   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:48:59.587281   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:48:59.599028   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:48:59.620223   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.620237   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:48:59.620313   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:48:59.641579   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.641592   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:48:59.641666   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:48:59.662317   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.662332   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:48:59.662404   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:48:59.682846   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.682864   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:48:59.682941   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:48:59.704619   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.704633   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:48:59.704702   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:48:59.725044   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.725057   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:48:59.725131   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:48:59.745974   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.745988   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:48:59.746059   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:48:59.766262   37621 logs.go:284] 0 containers: []
	W0613 12:48:59.766276   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:48:59.766283   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:48:59.766291   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:48:59.805830   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:48:59.805845   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:48:59.819734   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:48:59.819750   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:48:59.877872   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:48:59.877890   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:48:59.877900   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:48:59.894775   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:48:59.894791   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:02.451468   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:02.463502   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:02.484218   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.484233   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:02.484301   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:02.506111   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.506124   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:02.506191   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:02.526215   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.526230   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:02.526301   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:02.547011   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.547025   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:02.547095   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:02.567318   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.567331   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:02.567397   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:02.589084   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.589097   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:02.589165   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:02.609676   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.609690   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:02.609761   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:02.630530   37621 logs.go:284] 0 containers: []
	W0613 12:49:02.630542   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:02.630550   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:02.630557   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:02.645260   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:02.645274   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:02.705650   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:02.705664   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:02.705671   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:02.722015   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:02.722029   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:02.779051   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:02.779088   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:05.322898   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:05.334151   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:05.355430   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.355444   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:05.355513   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:05.377021   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.377034   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:05.377103   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:05.398475   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.398488   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:05.398559   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:05.418619   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.418631   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:05.418700   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:05.439649   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.439662   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:05.439752   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:05.461250   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.461264   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:05.461334   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:05.481418   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.481431   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:05.481507   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:05.502572   37621 logs.go:284] 0 containers: []
	W0613 12:49:05.502586   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:05.502594   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:05.502601   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:05.542967   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:05.542981   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:05.557335   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:05.557351   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:05.615744   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:05.615771   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:05.615780   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:05.632067   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:05.632083   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:08.187908   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:08.199195   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:08.221321   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.221333   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:08.221403   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:08.243014   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.243027   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:08.243102   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:08.263516   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.263529   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:08.263598   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:08.284720   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.284733   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:08.284803   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:08.304999   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.305012   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:08.305084   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:08.326745   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.326758   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:08.326829   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:08.348512   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.348527   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:08.348601   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:08.369661   37621 logs.go:284] 0 containers: []
	W0613 12:49:08.369679   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:08.369688   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:08.369698   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:08.424376   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:08.424394   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:08.466341   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:08.466356   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:08.480390   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:08.480406   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:08.538352   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:08.538365   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:08.538373   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:11.056339   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:11.069028   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:11.090628   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.090650   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:11.090729   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:11.110789   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.110803   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:11.110879   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:11.132291   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.132305   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:11.132375   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:11.153862   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.153875   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:11.153955   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:11.174970   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.174985   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:11.175058   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:11.207800   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.207835   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:11.207927   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:11.228972   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.228984   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:11.229059   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:11.250310   37621 logs.go:284] 0 containers: []
	W0613 12:49:11.250323   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:11.250331   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:11.250339   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:11.304395   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:11.304411   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:11.344604   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:11.344621   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:11.359063   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:11.359078   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:11.417619   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:11.417632   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:11.417639   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:13.935495   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:13.948475   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:13.969051   37621 logs.go:284] 0 containers: []
	W0613 12:49:13.969066   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:13.969133   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:13.990121   37621 logs.go:284] 0 containers: []
	W0613 12:49:13.990135   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:13.990206   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:14.010207   37621 logs.go:284] 0 containers: []
	W0613 12:49:14.010220   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:14.010295   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:14.030836   37621 logs.go:284] 0 containers: []
	W0613 12:49:14.030850   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:14.030918   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:14.051500   37621 logs.go:284] 0 containers: []
	W0613 12:49:14.051514   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:14.051584   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:14.072799   37621 logs.go:284] 0 containers: []
	W0613 12:49:14.072811   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:14.072880   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:14.093831   37621 logs.go:284] 0 containers: []
	W0613 12:49:14.093844   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:14.093914   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:14.115382   37621 logs.go:284] 0 containers: []
	W0613 12:49:14.115400   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:14.115408   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:14.115422   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:14.129280   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:14.129294   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:14.191102   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:14.191115   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:14.191123   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:14.208082   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:14.208098   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:14.266155   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:14.266169   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:16.810174   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:16.822398   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:16.843219   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.843231   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:16.843297   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:16.864244   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.864256   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:16.864327   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:16.884660   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.884672   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:16.884739   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:16.905343   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.905356   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:16.905429   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:16.926059   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.926072   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:16.926144   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:16.948484   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.948500   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:16.948581   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:16.969338   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.969351   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:16.969423   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:16.990128   37621 logs.go:284] 0 containers: []
	W0613 12:49:16.990141   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:16.990149   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:16.990156   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:17.028836   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:17.028849   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:17.042882   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:17.042907   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:17.099851   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:17.099863   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:17.099871   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:17.115996   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:17.116010   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:19.672997   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:19.685866   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:19.706790   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.706804   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:19.706873   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:19.729421   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.729441   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:19.729519   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:19.750742   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.750754   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:19.750826   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:19.773560   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.773578   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:19.773681   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:19.795082   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.795095   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:19.795163   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:19.816101   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.816115   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:19.816190   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:19.837253   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.837265   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:19.837333   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:19.860646   37621 logs.go:284] 0 containers: []
	W0613 12:49:19.860659   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:19.860667   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:19.860676   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:19.915588   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:19.915604   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:19.957126   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:19.957144   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:19.971901   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:19.971915   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:20.030227   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:20.030241   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:20.030248   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:22.546819   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:22.559621   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:22.580316   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.580330   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:22.580411   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:22.601440   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.601455   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:22.601530   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:22.621927   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.621942   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:22.622016   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:22.644678   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.644694   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:22.644765   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:22.665245   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.665257   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:22.665330   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:22.687815   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.687828   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:22.687900   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:22.708642   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.708656   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:22.708723   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:22.729917   37621 logs.go:284] 0 containers: []
	W0613 12:49:22.729934   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:22.729944   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:22.729953   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:22.746248   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:22.746262   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:22.800548   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:22.800564   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:22.842840   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:22.842859   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:22.857374   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:22.857391   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:22.915980   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:25.416678   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:25.427978   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:49:25.448767   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.448780   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:49:25.448849   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:49:25.469313   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.469333   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:49:25.469415   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:49:25.508816   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.508829   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:49:25.508901   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:49:25.529765   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.529778   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:49:25.529850   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:49:25.550852   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.550865   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:49:25.550932   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:49:25.571132   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.571145   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:49:25.571216   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:49:25.591390   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.591404   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:49:25.591475   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:49:25.611399   37621 logs.go:284] 0 containers: []
	W0613 12:49:25.611413   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:49:25.611421   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:49:25.611429   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:49:25.651610   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:49:25.651624   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0613 12:49:25.665792   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:49:25.665808   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:49:25.722687   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:49:25.722700   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:49:25.722711   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:49:25.738659   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:49:25.738674   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:49:28.295498   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:49:28.308092   37621 kubeadm.go:640] restartCluster took 4m12.652043891s
	W0613 12:49:28.308136   37621 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
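restartCluster gave up above because the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` probe never matched a process. A rough Go sketch of that liveness loop, with the pattern taken from the log (apiserverProcessUp is an illustrative name; the 30s deadline and 3s interval here are placeholders, the real loop above ran for over four minutes):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverProcessUp runs the same probe as the log lines above:
// pgrep exits non-zero when nothing matches the pattern.
func apiserverProcessUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if apiserverProcessUp() {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("apiserver healthz: apiserver process never appeared")
}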
	I0613 12:49:28.308151   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0613 12:49:28.723336   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:49:28.735038   37621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 12:49:28.744327   37621 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 12:49:28.744388   37621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:49:28.753426   37621 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
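The config check above treats a failing `ls` (status 2, every file missing) as "no stale configs to clean up", so cleanup is skipped and kubeadm init proceeds on a clean slate. A hedged sketch of that decision, using the same file list (staleConfigPresent is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
)

// staleConfigPresent reports whether kubeconfigs from a previous
// cluster are still on disk; ls exits non-zero (status 2 in the log)
// as soon as any listed file is missing.
func staleConfigPresent() bool {
	return exec.Command("sudo", "ls", "-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf").Run() == nil
}

func main() {
	if !staleConfigPresent() {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}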
	I0613 12:49:28.753458   37621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 12:49:28.805011   37621 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0613 12:49:28.805061   37621 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 12:49:29.055471   37621 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 12:49:29.055564   37621 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 12:49:29.055649   37621 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0613 12:49:29.242249   37621 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 12:49:29.242968   37621 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 12:49:29.249693   37621 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0613 12:49:29.321329   37621 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 12:49:29.342765   37621 out.go:204]   - Generating certificates and keys ...
	I0613 12:49:29.342852   37621 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 12:49:29.342907   37621 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 12:49:29.342983   37621 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0613 12:49:29.343040   37621 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0613 12:49:29.343123   37621 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0613 12:49:29.343191   37621 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0613 12:49:29.343273   37621 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0613 12:49:29.343345   37621 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0613 12:49:29.343434   37621 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0613 12:49:29.343500   37621 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0613 12:49:29.343542   37621 kubeadm.go:322] [certs] Using the existing "sa" key
	I0613 12:49:29.343586   37621 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 12:49:29.530367   37621 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 12:49:29.663569   37621 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 12:49:30.116582   37621 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 12:49:30.268460   37621 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 12:49:30.269157   37621 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 12:49:30.290715   37621 out.go:204]   - Booting up control plane ...
	I0613 12:49:30.290962   37621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 12:49:30.291100   37621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 12:49:30.291243   37621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 12:49:30.291385   37621 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 12:49:30.291681   37621 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 12:50:10.277181   37621 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 12:50:10.277645   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:50:10.277804   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:50:15.279310   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:50:15.279531   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:50:25.453131   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:50:25.453369   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:50:45.454742   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:50:45.454993   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:51:25.456693   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:51:25.456860   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:51:25.456871   37621 kubeadm.go:322] 
	I0613 12:51:25.456902   37621 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:51:25.456926   37621 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:51:25.456932   37621 kubeadm.go:322] 
	I0613 12:51:25.456957   37621 kubeadm.go:322] This error is likely caused by:
	I0613 12:51:25.456983   37621 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:51:25.457071   37621 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:51:25.457082   37621 kubeadm.go:322] 
	I0613 12:51:25.457157   37621 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:51:25.457219   37621 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:51:25.457246   37621 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:51:25.457252   37621 kubeadm.go:322] 
	I0613 12:51:25.457336   37621 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:51:25.457413   37621 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0613 12:51:25.457481   37621 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0613 12:51:25.457523   37621 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:51:25.457581   37621 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:51:25.457605   37621 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:51:25.461207   37621 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:51:25.461304   37621 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:51:25.461448   37621 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:51:25.461548   37621 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:51:25.461623   37621 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:51:25.461686   37621 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
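The [kubelet-check] lines above are kubeadm polling the kubelet's healthz endpoint until its four-minute deadline expires; every attempt ends in "connection refused" because the kubelet never came up. A minimal Go sketch of that style of probe (the URL and failure message are from the log; the timeout and poll interval here are illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubeletHealthz polls http://localhost:10248/healthz the way
// the [kubelet-check] lines do; while the kubelet is down each GET
// fails with "connection refused".
func waitForKubeletHealthz(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitForKubeletHealthz(40 * time.Second); err != nil {
		fmt.Println("[kubelet-check]", err)
	}
}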
	W0613 12:51:25.461827   37621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0613 12:51:25.461894   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0613 12:51:25.877875   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:51:25.890000   37621 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0613 12:51:25.890081   37621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 12:51:25.900117   37621 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0613 12:51:25.900139   37621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0613 12:51:25.953827   37621 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0613 12:51:25.953872   37621 kubeadm.go:322] [preflight] Running pre-flight checks
	I0613 12:51:26.253950   37621 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0613 12:51:26.254040   37621 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0613 12:51:26.254169   37621 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0613 12:51:26.480723   37621 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0613 12:51:26.481970   37621 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0613 12:51:26.491946   37621 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0613 12:51:26.589126   37621 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0613 12:51:26.611401   37621 out.go:204]   - Generating certificates and keys ...
	I0613 12:51:26.611590   37621 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0613 12:51:26.611700   37621 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0613 12:51:26.611789   37621 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0613 12:51:26.611867   37621 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0613 12:51:26.612099   37621 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0613 12:51:26.612204   37621 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0613 12:51:26.612298   37621 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0613 12:51:26.612394   37621 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0613 12:51:26.612495   37621 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0613 12:51:26.612633   37621 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0613 12:51:26.612682   37621 kubeadm.go:322] [certs] Using the existing "sa" key
	I0613 12:51:26.612776   37621 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0613 12:51:27.115330   37621 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0613 12:51:27.210187   37621 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0613 12:51:27.266747   37621 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0613 12:51:27.482848   37621 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0613 12:51:27.484329   37621 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0613 12:51:27.529208   37621 out.go:204]   - Booting up control plane ...
	I0613 12:51:27.529359   37621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0613 12:51:27.529559   37621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0613 12:51:27.529620   37621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0613 12:51:27.529705   37621 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0613 12:51:27.529889   37621 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0613 12:52:07.494169   37621 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0613 12:52:07.496781   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:52:07.497014   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:52:12.497586   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:52:12.497751   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:52:22.498884   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:52:22.499036   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:52:42.500145   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:52:42.500332   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:53:22.503497   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:53:22.503722   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:53:22.503738   37621 kubeadm.go:322] 
	I0613 12:53:22.503780   37621 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:53:22.503817   37621 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:53:22.503821   37621 kubeadm.go:322] 
	I0613 12:53:22.503894   37621 kubeadm.go:322] This error is likely caused by:
	I0613 12:53:22.503936   37621 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:53:22.504054   37621 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:53:22.504064   37621 kubeadm.go:322] 
	I0613 12:53:22.504194   37621 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:53:22.504255   37621 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:53:22.504295   37621 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:53:22.504301   37621 kubeadm.go:322] 
	I0613 12:53:22.504412   37621 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:53:22.504513   37621 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0613 12:53:22.504607   37621 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0613 12:53:22.504668   37621 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:53:22.504758   37621 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:53:22.504783   37621 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:53:22.507330   37621 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:53:22.507416   37621 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:53:22.507527   37621 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:53:22.507616   37621 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:53:22.507698   37621 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:53:22.507759   37621 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0613 12:53:22.507784   37621 kubeadm.go:406] StartCluster complete in 8m6.705838356s
	I0613 12:53:22.507876   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:53:22.529672   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.529686   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:53:22.529762   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:53:22.551294   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.551308   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:53:22.551391   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:53:22.572714   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.572726   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:53:22.572797   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:53:22.594327   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.594344   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:53:22.594421   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:53:22.615769   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.615783   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:53:22.615855   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:53:22.636233   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.636248   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:53:22.636320   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:53:22.656691   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.656705   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:53:22.656773   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:53:22.678048   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.678062   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:53:22.678071   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:53:22.678079   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:53:22.737026   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:53:22.737040   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:53:22.737048   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:53:22.752861   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:53:22.752873   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:53:22.805722   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:53:22.805737   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:53:22.846675   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:53:22.846689   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0613 12:53:22.861057   37621 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0613 12:53:22.861083   37621 out.go:239] * 
	W0613 12:53:22.861138   37621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 12:53:22.861156   37621 out.go:239] * 
	W0613 12:53:22.861804   37621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0613 12:53:22.927794   37621 out.go:177] 
	W0613 12:53:22.970591   37621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0613 12:53:22.970661   37621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0613 12:53:22.970692   37621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0613 12:53:23.014693   37621 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
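The failure above traces back to a cgroup-driver mismatch flagged in the preflight warnings: Docker reports "cgroupfs" while kubeadm recommends "systemd", and the kubelet never becomes healthy on port 10248. As a sketch only (not verified against this run), the remediation that minikube itself suggests in the log could be retried using the same profile, driver, and Kubernetes version recorded in the failing invocation:

	# Hypothetical retry of the failing start. The only new flag is the
	# cgroup-driver override suggested by minikube's own output above;
	# profile name, memory, driver, and version are copied verbatim from
	# the failing command logged in this report.
	out/minikube-darwin-amd64 start -p old-k8s-version-554000 \
	  --memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd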
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:44:58.001656032Z",
	            "FinishedAt": "2023-06-13T19:44:55.288857813Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91eea7ce06a736bcebc6e16ec019e29531e35edc0efa8dd27d1bdcf8954dcd78",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59652"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59653"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59654"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59655"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59656"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/91eea7ce06a7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "522ec1a96f4f34c0dab581c1f3f60535ac037d9119dddf68b5c26fab103cb29c",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
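
The inspect output above is the data that the later `docker container inspect -f` template calls read: each container port the kicbase node exposes (22/tcp, 2376/tcp, 8443/tcp, ...) is published on a 127.0.0.1 port of the macOS host. A minimal Go sketch of reading one of those bindings from the same JSON — the profile name is taken from this run; this is illustrative, not minikube's own code:

// portfor.go - read the published host port for a container port from
// `docker container inspect`, the same NetworkSettings.Ports data shown
// in the dump above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspect mirrors just the fields we need from the inspect JSON array.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "old-k8s-version-554000").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("container not found")
	}
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:59652 in this run
	}
}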
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (354.964401ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
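
`minikube status` reports machine state through its exit code as well as stdout, which is why the helper tolerates exit status 2 here while still reading "Running" from the output. A sketch of that tolerant invocation — the exit-code handling is inferred from the "(may be ok)" note above, not from a documented contract:

// status_check.go - run `minikube status` and treat a non-zero exit as
// state information rather than a hard failure, as the test helper does.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-554000")
	out, err := cmd.Output() // stdout is still captured on ExitError
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &ee):
		// Non-zero exit, but status still printed the state
		// (here "Running" with exit status 2), so log and continue.
		fmt.Printf("host state: %s (exit %d, may be ok)\n", out, ee.ExitCode())
	default:
		log.Fatal(err) // binary missing, etc.
	}
}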
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-554000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-554000 logs -n 25: (1.34308222s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-185000 sudo                                 | kubenet-185000         | jenkins | v1.30.1 | 13 Jun 23 12:39 PDT | 13 Jun 23 12:39 PDT |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-185000 sudo                                 | kubenet-185000         | jenkins | v1.30.1 | 13 Jun 23 12:39 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-185000 sudo                                 | kubenet-185000         | jenkins | v1.30.1 | 13 Jun 23 12:39 PDT | 13 Jun 23 12:40 PDT |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-185000 sudo find                            | kubenet-185000         | jenkins | v1.30.1 | 13 Jun 23 12:40 PDT | 13 Jun 23 12:40 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-185000 sudo crio                            | kubenet-185000         | jenkins | v1.30.1 | 13 Jun 23 12:40 PDT | 13 Jun 23 12:40 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-185000                                      | kubenet-185000         | jenkins | v1.30.1 | 13 Jun 23 12:40 PDT | 13 Jun 23 12:40 PDT |
	| start   | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:40 PDT | 13 Jun 23 12:41 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-874000             | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:41 PDT | 13 Jun 23 12:41 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:41 PDT | 13 Jun 23 12:41 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-874000                  | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:41 PDT | 13 Jun 23 12:41 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:41 PDT | 13 Jun 23 12:51 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-554000        | old-k8s-version-554000 | jenkins | v1.30.1 | 13 Jun 23 12:43 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-554000                              | old-k8s-version-554000 | jenkins | v1.30.1 | 13 Jun 23 12:44 PDT | 13 Jun 23 12:44 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-554000             | old-k8s-version-554000 | jenkins | v1.30.1 | 13 Jun 23 12:44 PDT | 13 Jun 23 12:44 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-554000                              | old-k8s-version-554000 | jenkins | v1.30.1 | 13 Jun 23 12:44 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-874000 sudo                              | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	| delete  | -p no-preload-874000                                   | no-preload-874000      | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	| start   | -p embed-certs-550000                                  | embed-certs-550000     | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:52 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-550000            | embed-certs-550000     | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:53 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-550000                                  | embed-certs-550000     | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:53 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-550000                 | embed-certs-550000     | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:53 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-550000                                  | embed-certs-550000     | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 12:53:13
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 12:53:13.349136   38298 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:53:13.349289   38298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:53:13.349296   38298 out.go:309] Setting ErrFile to fd 2...
	I0613 12:53:13.349300   38298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:53:13.349416   38298 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:53:13.350853   38298 out.go:303] Setting JSON to false
	I0613 12:53:13.371884   38298 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10364,"bootTime":1686675629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 12:53:13.371973   38298 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 12:53:13.394564   38298 out.go:177] * [embed-certs-550000] minikube v1.30.1 on Darwin 13.4
	I0613 12:53:13.437055   38298 notify.go:220] Checking for updates...
	I0613 12:53:13.458752   38298 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 12:53:13.479850   38298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:53:13.501058   38298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 12:53:13.522793   38298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 12:53:13.544076   38298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 12:53:13.566026   38298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 12:53:13.589036   38298 config.go:182] Loaded profile config "embed-certs-550000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:53:13.589415   38298 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 12:53:13.644175   38298 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 12:53:13.644287   38298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:53:13.738266   38298 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:53:13.727558564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:53:13.759983   38298 out.go:177] * Using the docker driver based on existing profile
	I0613 12:53:13.780899   38298 start.go:297] selected driver: docker
	I0613 12:53:13.780924   38298 start.go:884] validating driver "docker" against &{Name:embed-certs-550000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-550000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:53:13.781043   38298 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 12:53:13.785092   38298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 12:53:13.881540   38298 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 19:53:13.870440476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 12:53:13.881783   38298 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0613 12:53:13.881807   38298 cni.go:84] Creating CNI manager for ""
	I0613 12:53:13.881820   38298 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 12:53:13.881835   38298 start_flags.go:319] config:
	{Name:embed-certs-550000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-550000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:53:13.903667   38298 out.go:177] * Starting control plane node embed-certs-550000 in cluster embed-certs-550000
	I0613 12:53:13.925403   38298 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 12:53:13.946366   38298 out.go:177] * Pulling base image ...
	I0613 12:53:13.988549   38298 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 12:53:13.988644   38298 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0613 12:53:13.988650   38298 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 12:53:13.988671   38298 cache.go:57] Caching tarball of preloaded images
	I0613 12:53:13.988862   38298 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 12:53:13.988883   38298 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0613 12:53:13.989689   38298 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/config.json ...
	I0613 12:53:14.042250   38298 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 12:53:14.042271   38298 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 12:53:14.042294   38298 cache.go:195] Successfully downloaded all kic artifacts
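
Both cache checks above short-circuit the same way: the preload tarball is used because it already exists on disk, and the kicbase image is used because it can be found in the local daemon, so no pull happens. A sketch of the image-side check — the digest suffix is dropped here for brevity, and this is illustrative rather than minikube's implementation:

// image_cache.go - "found in local docker daemon, skipping pull":
// `docker image inspect` exits non-zero when the image is absent,
// so a failed inspect means the image must be pulled.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632"
	if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
		fmt.Println("found in local docker daemon, skipping pull")
		return
	}
	out, err := exec.Command("docker", "pull", ref).CombinedOutput()
	if err != nil {
		log.Fatalf("pull %s: %v\n%s", ref, err, out)
	}
	fmt.Println("pulled", ref)
}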
	I0613 12:53:14.042331   38298 start.go:365] acquiring machines lock for embed-certs-550000: {Name:mkd3f32e8038a33c20afd4d854b10c8096f04960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 12:53:14.042415   38298 start.go:369] acquired machines lock for "embed-certs-550000" in 63.101µs
	I0613 12:53:14.042440   38298 start.go:96] Skipping create...Using existing machine configuration
	I0613 12:53:14.042453   38298 fix.go:54] fixHost starting: 
	I0613 12:53:14.042712   38298 cli_runner.go:164] Run: docker container inspect embed-certs-550000 --format={{.State.Status}}
	I0613 12:53:14.093207   38298 fix.go:102] recreateIfNeeded on embed-certs-550000: state=Stopped err=<nil>
	W0613 12:53:14.093253   38298 fix.go:128] unexpected machine state, will restart: <nil>
	I0613 12:53:14.115206   38298 out.go:177] * Restarting existing docker container for "embed-certs-550000" ...
	I0613 12:53:14.156901   38298 cli_runner.go:164] Run: docker start embed-certs-550000
	I0613 12:53:14.403510   38298 cli_runner.go:164] Run: docker container inspect embed-certs-550000 --format={{.State.Status}}
	I0613 12:53:14.456977   38298 kic.go:426] container "embed-certs-550000" state is running.
	I0613 12:53:14.457602   38298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-550000
	I0613 12:53:14.514222   38298 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/config.json ...
	I0613 12:53:14.514624   38298 machine.go:88] provisioning docker machine ...
	I0613 12:53:14.514650   38298 ubuntu.go:169] provisioning hostname "embed-certs-550000"
	I0613 12:53:14.514726   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:14.575631   38298 main.go:141] libmachine: Using SSH client type: native
	I0613 12:53:14.576097   38298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59779 <nil> <nil>}
	I0613 12:53:14.576115   38298 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-550000 && echo "embed-certs-550000" | sudo tee /etc/hostname
	I0613 12:53:14.577189   38298 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0613 12:53:17.709442   38298 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-550000
	
	I0613 12:53:17.709551   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:17.759163   38298 main.go:141] libmachine: Using SSH client type: native
	I0613 12:53:17.759507   38298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59779 <nil> <nil>}
	I0613 12:53:17.759522   38298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-550000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-550000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-550000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 12:53:17.875405   38298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 12:53:17.875439   38298 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 12:53:17.875465   38298 ubuntu.go:177] setting up certificates
	I0613 12:53:17.875475   38298 provision.go:83] configureAuth start
	I0613 12:53:17.875561   38298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-550000
	I0613 12:53:17.925468   38298 provision.go:138] copyHostCerts
	I0613 12:53:17.925565   38298 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 12:53:17.925578   38298 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 12:53:17.925714   38298 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 12:53:17.925937   38298 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 12:53:17.925944   38298 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 12:53:17.926017   38298 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 12:53:17.926169   38298 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 12:53:17.926175   38298 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 12:53:17.926246   38298 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 12:53:17.926378   38298 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.embed-certs-550000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-550000]
	I0613 12:53:17.978396   38298 provision.go:172] copyRemoteCerts
	I0613 12:53:17.978453   38298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 12:53:17.978512   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:18.028402   38298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59779 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/embed-certs-550000/id_rsa Username:docker}
	I0613 12:53:18.116863   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 12:53:18.138767   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0613 12:53:18.160764   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0613 12:53:18.182682   38298 provision.go:86] duration metric: configureAuth took 307.18378ms
	I0613 12:53:18.182696   38298 ubuntu.go:193] setting minikube options for container-runtime
	I0613 12:53:18.182853   38298 config.go:182] Loaded profile config "embed-certs-550000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:53:18.182925   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:18.232720   38298 main.go:141] libmachine: Using SSH client type: native
	I0613 12:53:18.233060   38298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59779 <nil> <nil>}
	I0613 12:53:18.233070   38298 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 12:53:18.351393   38298 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 12:53:18.371265   38298 ubuntu.go:71] root file system type: overlay
	I0613 12:53:18.371385   38298 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 12:53:18.371466   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:18.422555   38298 main.go:141] libmachine: Using SSH client type: native
	I0613 12:53:18.422906   38298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59779 <nil> <nil>}
	I0613 12:53:18.422955   38298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 12:53:18.553507   38298 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 12:53:18.553601   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:18.603459   38298 main.go:141] libmachine: Using SSH client type: native
	I0613 12:53:18.603801   38298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 59779 <nil> <nil>}
	I0613 12:53:18.603816   38298 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 12:53:18.727919   38298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 12:53:18.727938   38298 machine.go:91] provisioned docker machine in 4.213185063s
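
The docker.service update above is deliberately idempotent: the rendered unit is written to docker.service.new, and only when `diff -u` reports a difference is it moved into place and the daemon reloaded, re-enabled, and restarted, so an unchanged unit never bounces Docker. The same write-if-changed pattern as a Go sketch (paths taken from the log; must run as root on the target; not minikube's implementation):

// unit_update.go - replace the live unit only when the rendered
// copy differs, then daemon-reload / enable / restart.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	rendered, err := os.ReadFile(unit + ".new")
	if err != nil {
		log.Fatal(err)
	}
	current, _ := os.ReadFile(unit) // may not exist yet
	if bytes.Equal(current, rendered) {
		return // unchanged: leave the running service undisturbed
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}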
	I0613 12:53:18.727949   38298 start.go:300] post-start starting for "embed-certs-550000" (driver="docker")
	I0613 12:53:18.727962   38298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 12:53:18.728049   38298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 12:53:18.728120   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:18.779255   38298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59779 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/embed-certs-550000/id_rsa Username:docker}
	I0613 12:53:18.867708   38298 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 12:53:18.872070   38298 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 12:53:18.872092   38298 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 12:53:18.872100   38298 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 12:53:18.872107   38298 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 12:53:18.872115   38298 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 12:53:18.872200   38298 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 12:53:18.872378   38298 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 12:53:18.872558   38298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 12:53:18.881788   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:53:18.903435   38298 start.go:303] post-start completed in 175.470493ms
	I0613 12:53:18.903529   38298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:53:18.903584   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:18.953484   38298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59779 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/embed-certs-550000/id_rsa Username:docker}
	I0613 12:53:19.038087   38298 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 12:53:19.043489   38298 fix.go:56] fixHost completed within 5.000895449s
	I0613 12:53:19.043505   38298 start.go:83] releasing machines lock for "embed-certs-550000", held for 5.000939635s
	I0613 12:53:19.043585   38298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-550000
	I0613 12:53:19.093443   38298 ssh_runner.go:195] Run: cat /version.json
	I0613 12:53:19.093467   38298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 12:53:19.093522   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:19.093537   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:19.145636   38298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59779 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/embed-certs-550000/id_rsa Username:docker}
	I0613 12:53:19.145902   38298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59779 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/embed-certs-550000/id_rsa Username:docker}
	I0613 12:53:19.332418   38298 ssh_runner.go:195] Run: systemctl --version
	I0613 12:53:19.337733   38298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0613 12:53:19.343727   38298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0613 12:53:19.362228   38298 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0613 12:53:19.362314   38298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0613 12:53:19.372169   38298 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
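
The find/sed pipeline above normalizes any loopback CNI config it finds: it injects a "name" field when one is missing and pins "cniVersion" to 1.0.0 so the file parses under newer CNI plugins. A structured version of that patch as a Go sketch — the filename here is an assumption; the real step globs /etc/cni/net.d/*loopback.conf* over ssh:

// cni_patch.go - ensure a loopback CNI config has a "name" and a
// pinned cniVersion, mirroring the sed edits in the log above.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // assumed filename
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // the sed above inserts this before the "type" line
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, append(out, '\n'), 0644); err != nil {
		log.Fatal(err)
	}
}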
	I0613 12:53:19.372194   38298 start.go:464] detecting cgroup driver to use...
	I0613 12:53:19.372214   38298 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:53:19.372345   38298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:53:19.388876   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0613 12:53:19.399414   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 12:53:19.409771   38298 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 12:53:19.409842   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 12:53:19.420381   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:53:19.430589   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 12:53:19.440641   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 12:53:19.450736   38298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 12:53:19.460266   38298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0613 12:53:19.470452   38298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 12:53:19.479083   38298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 12:53:19.487771   38298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:53:19.560395   38298 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0613 12:53:19.642597   38298 start.go:464] detecting cgroup driver to use...
	I0613 12:53:19.642618   38298 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 12:53:19.642689   38298 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 12:53:19.654994   38298 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 12:53:19.655070   38298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 12:53:19.667252   38298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 12:53:19.684902   38298 ssh_runner.go:195] Run: which cri-dockerd
	I0613 12:53:19.690757   38298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 12:53:19.700390   38298 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 12:53:19.720388   38298 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 12:53:19.827214   38298 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 12:53:19.924186   38298 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 12:53:19.924204   38298 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 12:53:19.941357   38298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:53:20.041059   38298 ssh_runner.go:195] Run: sudo systemctl restart docker
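
In this stretch both runtimes get pinned to the "cgroupfs" driver detected on the host: containerd via sed edits to /etc/containerd/config.toml, and dockerd via a generated /etc/docker/daemon.json followed by daemon-reload and restart. A sketch of the dockerd half — the JSON body is an assumption, since the log records only that 144 bytes were copied, not the contents:

// cgroup_driver.go - pin dockerd to the cgroupfs driver the way the
// provisioning step above does (write daemon.json, then restart).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Assumed contents; docker's documented knob for the cgroup driver.
	daemonJSON := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}` + "\n")
	if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}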
	I0613 12:53:20.321061   38298 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 12:53:20.395350   38298 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0613 12:53:20.464449   38298 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 12:53:20.533592   38298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:53:20.604022   38298 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0613 12:53:20.635475   38298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 12:53:20.710010   38298 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0613 12:53:20.799773   38298 start.go:511] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0613 12:53:20.799884   38298 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0613 12:53:20.805092   38298 start.go:532] Will wait 60s for crictl version
	I0613 12:53:20.805163   38298 ssh_runner.go:195] Run: which crictl
	I0613 12:53:20.809671   38298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0613 12:53:20.859893   38298 start.go:548] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1
	I0613 12:53:20.859970   38298 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:53:20.888269   38298 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 12:53:22.503497   37621 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0613 12:53:22.503722   37621 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0613 12:53:22.503738   37621 kubeadm.go:322] 
	I0613 12:53:22.503780   37621 kubeadm.go:322] Unfortunately, an error has occurred:
	I0613 12:53:22.503817   37621 kubeadm.go:322] 	timed out waiting for the condition
	I0613 12:53:22.503821   37621 kubeadm.go:322] 
	I0613 12:53:22.503894   37621 kubeadm.go:322] This error is likely caused by:
	I0613 12:53:22.503936   37621 kubeadm.go:322] 	- The kubelet is not running
	I0613 12:53:22.504054   37621 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0613 12:53:22.504064   37621 kubeadm.go:322] 
	I0613 12:53:22.504194   37621 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0613 12:53:22.504255   37621 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0613 12:53:22.504295   37621 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0613 12:53:22.504301   37621 kubeadm.go:322] 
	I0613 12:53:22.504412   37621 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0613 12:53:22.504513   37621 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0613 12:53:22.504607   37621 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0613 12:53:22.504668   37621 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0613 12:53:22.504758   37621 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0613 12:53:22.504783   37621 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0613 12:53:22.507330   37621 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0613 12:53:22.507416   37621 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0613 12:53:22.507527   37621 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
	I0613 12:53:22.507616   37621 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0613 12:53:22.507698   37621 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0613 12:53:22.507759   37621 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0613 12:53:22.507784   37621 kubeadm.go:406] StartCluster complete in 8m6.705838356s
	I0613 12:53:22.507876   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0613 12:53:22.529672   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.529686   37621 logs.go:286] No container was found matching "kube-apiserver"
	I0613 12:53:22.529762   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0613 12:53:22.551294   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.551308   37621 logs.go:286] No container was found matching "etcd"
	I0613 12:53:22.551391   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0613 12:53:22.572714   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.572726   37621 logs.go:286] No container was found matching "coredns"
	I0613 12:53:22.572797   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0613 12:53:22.594327   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.594344   37621 logs.go:286] No container was found matching "kube-scheduler"
	I0613 12:53:22.594421   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0613 12:53:22.615769   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.615783   37621 logs.go:286] No container was found matching "kube-proxy"
	I0613 12:53:22.615855   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0613 12:53:22.636233   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.636248   37621 logs.go:286] No container was found matching "kube-controller-manager"
	I0613 12:53:22.636320   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0613 12:53:22.656691   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.656705   37621 logs.go:286] No container was found matching "kindnet"
	I0613 12:53:22.656773   37621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0613 12:53:22.678048   37621 logs.go:284] 0 containers: []
	W0613 12:53:22.678062   37621 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0613 12:53:22.678071   37621 logs.go:123] Gathering logs for describe nodes ...
	I0613 12:53:22.678079   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0613 12:53:22.737026   37621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0613 12:53:22.737040   37621 logs.go:123] Gathering logs for Docker ...
	I0613 12:53:22.737048   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0613 12:53:22.752861   37621 logs.go:123] Gathering logs for container status ...
	I0613 12:53:22.752873   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0613 12:53:22.805722   37621 logs.go:123] Gathering logs for kubelet ...
	I0613 12:53:22.805737   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0613 12:53:22.846675   37621 logs.go:123] Gathering logs for dmesg ...
	I0613 12:53:22.846689   37621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0613 12:53:22.861057   37621 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0613 12:53:22.861083   37621 out.go:239] * 
	W0613 12:53:22.861138   37621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0613 12:53:22.861156   37621 out.go:239] * 
	W0613 12:53:22.861804   37621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0613 12:53:22.927794   37621 out.go:177] 
	W0613 12:53:22.970591   37621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0613 12:53:22.970661   37621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0613 12:53:22.970692   37621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0613 12:53:23.014693   37621 out.go:177] 
	I0613 12:53:20.962367   38298 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0613 12:53:20.962569   38298 cli_runner.go:164] Run: docker exec -t embed-certs-550000 dig +short host.docker.internal
	I0613 12:53:21.081206   38298 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 12:53:21.081351   38298 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 12:53:21.086697   38298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 12:53:21.098082   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:21.148058   38298 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 12:53:21.148132   38298 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:53:21.170043   38298 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0613 12:53:21.170067   38298 docker.go:566] Images already preloaded, skipping extraction
	I0613 12:53:21.170157   38298 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 12:53:21.191051   38298 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0613 12:53:21.191076   38298 cache_images.go:84] Images are preloaded, skipping loading
	I0613 12:53:21.191164   38298 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 12:53:21.239220   38298 cni.go:84] Creating CNI manager for ""
	I0613 12:53:21.239237   38298 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 12:53:21.239255   38298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0613 12:53:21.239271   38298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-550000 NodeName:embed-certs-550000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0613 12:53:21.239393   38298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-550000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0613 12:53:21.239466   38298 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-550000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:embed-certs-550000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0613 12:53:21.239542   38298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0613 12:53:21.248572   38298 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 12:53:21.248628   38298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 12:53:21.257248   38298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0613 12:53:21.273680   38298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 12:53:21.290262   38298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0613 12:53:21.306890   38298 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0613 12:53:21.311280   38298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 12:53:21.323000   38298 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000 for IP: 192.168.76.2
	I0613 12:53:21.323020   38298 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:53:21.323172   38298 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 12:53:21.323223   38298 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 12:53:21.323321   38298 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/client.key
	I0613 12:53:21.323401   38298 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/apiserver.key.31bdca25
	I0613 12:53:21.323456   38298 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/proxy-client.key
	I0613 12:53:21.323669   38298 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 12:53:21.323713   38298 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 12:53:21.323725   38298 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 12:53:21.323768   38298 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 12:53:21.323803   38298 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 12:53:21.323835   38298 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 12:53:21.323906   38298 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 12:53:21.324537   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 12:53:21.346508   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0613 12:53:21.368490   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 12:53:21.390660   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/embed-certs-550000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0613 12:53:21.412747   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 12:53:21.434346   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 12:53:21.456745   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 12:53:21.478731   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 12:53:21.500767   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 12:53:21.522743   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 12:53:21.544988   38298 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 12:53:21.568124   38298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 12:53:21.585599   38298 ssh_runner.go:195] Run: openssl version
	I0613 12:53:21.592606   38298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 12:53:21.603372   38298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:53:21.608065   38298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:53:21.608129   38298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 12:53:21.615763   38298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 12:53:21.625714   38298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 12:53:21.636069   38298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 12:53:21.640967   38298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 12:53:21.641025   38298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 12:53:21.648798   38298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
	I0613 12:53:21.658133   38298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 12:53:21.668593   38298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 12:53:21.673335   38298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 12:53:21.673386   38298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 12:53:21.680619   38298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
	I0613 12:53:21.689710   38298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 12:53:21.693992   38298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0613 12:53:21.701084   38298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0613 12:53:21.708320   38298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0613 12:53:21.715807   38298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0613 12:53:21.723090   38298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0613 12:53:21.730172   38298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0613 12:53:21.737078   38298 kubeadm.go:404] StartCluster: {Name:embed-certs-550000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-550000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 12:53:21.737186   38298 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 12:53:21.758713   38298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 12:53:21.767979   38298 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0613 12:53:21.767995   38298 kubeadm.go:636] restartCluster start
	I0613 12:53:21.768053   38298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0613 12:53:21.776629   38298 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:53:21.776705   38298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-550000
	I0613 12:53:21.827031   38298 kubeconfig.go:135] verify returned: extract IP: "embed-certs-550000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 12:53:21.827219   38298 kubeconfig.go:146] "embed-certs-550000" context is missing from /Users/jenkins/minikube-integration/15003-20351/kubeconfig - will repair!
	I0613 12:53:21.827546   38298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 12:53:21.829215   38298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0613 12:53:21.838607   38298 api_server.go:166] Checking apiserver status ...
	I0613 12:53:21.838675   38298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:53:21.848652   38298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:53:22.348829   38298 api_server.go:166] Checking apiserver status ...
	I0613 12:53:22.348945   38298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:53:22.360265   38298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:53:22.848906   38298 api_server.go:166] Checking apiserver status ...
	I0613 12:53:22.849028   38298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 12:53:22.859481   38298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:53:23.348750   38298 api_server.go:166] Checking apiserver status ...
	I0613 12:53:23.348814   38298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	* 
	* ==> Docker <==
	* Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.767650479Z" level=info msg="Loading containers: start."
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.856306269Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.893875182Z" level=info msg="Loading containers: done."
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.902742638Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.902801729Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.932004572Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.932050353Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:45:03 old-k8s-version-554000 systemd[1]: Started Docker Application Container Engine.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.749280155Z" level=info msg="Processing signal 'terminated'"
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.750223589Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.750398147Z" level=info msg="Daemon shutdown complete"
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: docker.service: Deactivated successfully.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Starting Docker Application Container Engine...
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:11.806954552Z" level=info msg="Starting up"
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:11.952568721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.130460518Z" level=info msg="Loading containers: start."
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.220890932Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.259079227Z" level=info msg="Loading containers: done."
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.268343604Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.268406325Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.296356733Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.296512735Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:45:12 old-k8s-version-554000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-06-13T19:53:24Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:53:24 up  2:52,  0 users,  load average: 0.91, 0.98, 1.19
	Linux old-k8s-version-554000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jun 13 19:53:23 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 13 19:53:23 old-k8s-version-554000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Jun 13 19:53:23 old-k8s-version-554000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 13 19:53:23 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: I0613 19:53:23.904337   16797 server.go:410] Version: v1.16.0
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: I0613 19:53:23.904505   16797 plugins.go:100] No cloud provider specified.
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: I0613 19:53:23.904514   16797 server.go:773] Client rotation is on, will bootstrap in background
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: I0613 19:53:23.906270   16797 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: W0613 19:53:23.908366   16797 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: W0613 19:53:23.908502   16797 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 13 19:53:23 old-k8s-version-554000 kubelet[16797]: F0613 19:53:23.908800   16797 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 13 19:53:23 old-k8s-version-554000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 13 19:53:23 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 13 19:53:24 old-k8s-version-554000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Jun 13 19:53:24 old-k8s-version-554000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 13 19:53:24 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: I0613 19:53:24.659639   16898 server.go:410] Version: v1.16.0
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: I0613 19:53:24.659985   16898 plugins.go:100] No cloud provider specified.
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: I0613 19:53:24.660027   16898 server.go:773] Client rotation is on, will bootstrap in background
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: I0613 19:53:24.664269   16898 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: W0613 19:53:24.665162   16898 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: W0613 19:53:24.665227   16898 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 13 19:53:24 old-k8s-version-554000 kubelet[16898]: F0613 19:53:24.665252   16898 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 13 19:53:24 old-k8s-version-554000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 13 19:53:24 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0613 12:53:24.700008   38385 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (356.146213ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-554000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:53:48.482231   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:54:27.498867   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:54:38.921792   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:55:05.958678   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:55:19.675132   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:56:06.051768   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:56:10.145954   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.152469   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.164621   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.186461   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.226602   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.307211   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.467404   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:10.789569   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:11.431836   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:12.712112   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:15.272452   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:56:20.393609   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:56:29.003693   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:56:30.634087   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
E0613 12:56:31.835959   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:56:38.994741   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:56:42.425891   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 12:56:46.456200   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:56:51.115164   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:57:23.744856   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:57:29.102636   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:57:32.076553   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:57:54.890879   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:58:02.051219   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:58:02.755776   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:58:15.832799   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:58:22.730353   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:58:46.791183   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:59:25.809101   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 12:59:27.507234   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:00:19.683692   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:00:50.559198   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:01:06.061724   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:01:10.154728   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:01:31.845877   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 13:01:37.844592   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:01:39.004858   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 13:01:42.434472   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 13:01:46.465113   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:02:23.752800   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (358.640976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-554000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:44:58.001656032Z",
	            "FinishedAt": "2023-06-13T19:44:55.288857813Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91eea7ce06a736bcebc6e16ec019e29531e35edc0efa8dd27d1bdcf8954dcd78",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59652"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59653"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59654"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59655"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59656"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/91eea7ce06a7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "522ec1a96f4f34c0dab581c1f3f60535ac037d9119dddf68b5c26fab103cb29c",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (358.095308ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-554000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-554000 logs -n 25: (1.397307958s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-554000        | old-k8s-version-554000       | jenkins | v1.30.1 | 13 Jun 23 12:43 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-554000                              | old-k8s-version-554000       | jenkins | v1.30.1 | 13 Jun 23 12:44 PDT | 13 Jun 23 12:44 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-554000             | old-k8s-version-554000       | jenkins | v1.30.1 | 13 Jun 23 12:44 PDT | 13 Jun 23 12:44 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-554000                              | old-k8s-version-554000       | jenkins | v1.30.1 | 13 Jun 23 12:44 PDT |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-874000 sudo                              | no-preload-874000            | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-874000                                   | no-preload-874000            | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-874000                                   | no-preload-874000            | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-874000                                   | no-preload-874000            | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	| delete  | -p no-preload-874000                                   | no-preload-874000            | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:51 PDT |
	| start   | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:51 PDT | 13 Jun 23 12:52 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-550000            | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:53 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:53 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-550000                 | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:53 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:53 PDT | 13 Jun 23 12:58 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-550000 sudo                             | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	| delete  | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	| delete  | -p                                                     | disable-driver-mounts-899000 | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | disable-driver-mounts-899000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 13:00 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690000  | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:00 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:00 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690000       | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:00 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT |                     |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 13:00:25
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 13:00:25.846885   38897 out.go:296] Setting OutFile to fd 1 ...
	I0613 13:00:25.847067   38897 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 13:00:25.847072   38897 out.go:309] Setting ErrFile to fd 2...
	I0613 13:00:25.847077   38897 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 13:00:25.847189   38897 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 13:00:25.848615   38897 out.go:303] Setting JSON to false
	I0613 13:00:25.867620   38897 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10796,"bootTime":1686675629,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 13:00:25.867699   38897 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 13:00:25.889094   38897 out.go:177] * [default-k8s-diff-port-690000] minikube v1.30.1 on Darwin 13.4
	I0613 13:00:25.931432   38897 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 13:00:25.931476   38897 notify.go:220] Checking for updates...
	I0613 13:00:25.974092   38897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 13:00:25.995067   38897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 13:00:26.016081   38897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 13:00:26.037063   38897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 13:00:26.058214   38897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 13:00:26.079178   38897 config.go:182] Loaded profile config "default-k8s-diff-port-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 13:00:26.079567   38897 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 13:00:26.135250   38897 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 13:00:26.135374   38897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 13:00:26.232731   38897 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 20:00:26.220783805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 13:00:26.254527   38897 out.go:177] * Using the docker driver based on existing profile
	I0613 13:00:26.276376   38897 start.go:297] selected driver: docker
	I0613 13:00:26.276398   38897 start.go:884] validating driver "docker" against &{Name:default-k8s-diff-port-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 13:00:26.276529   38897 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 13:00:26.280484   38897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 13:00:26.373562   38897 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 20:00:26.36199749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 13:00:26.373814   38897 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0613 13:00:26.373838   38897 cni.go:84] Creating CNI manager for ""
	I0613 13:00:26.373850   38897 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 13:00:26.373862   38897 start_flags.go:319] config:
	{Name:default-k8s-diff-port-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 13:00:26.421030   38897 out.go:177] * Starting control plane node default-k8s-diff-port-690000 in cluster default-k8s-diff-port-690000
	I0613 13:00:26.442274   38897 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 13:00:26.463338   38897 out.go:177] * Pulling base image ...
	I0613 13:00:26.507218   38897 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 13:00:26.507239   38897 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 13:00:26.507325   38897 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0613 13:00:26.507355   38897 cache.go:57] Caching tarball of preloaded images
	I0613 13:00:26.508235   38897 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 13:00:26.508464   38897 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0613 13:00:26.508859   38897 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/config.json ...
	I0613 13:00:26.557417   38897 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 13:00:26.557439   38897 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 13:00:26.557460   38897 cache.go:195] Successfully downloaded all kic artifacts
	I0613 13:00:26.557546   38897 start.go:365] acquiring machines lock for default-k8s-diff-port-690000: {Name:mk06c059c74c0997200727a14b97a69e6a2e5b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 13:00:26.557664   38897 start.go:369] acquired machines lock for "default-k8s-diff-port-690000" in 93.916µs
	I0613 13:00:26.557694   38897 start.go:96] Skipping create...Using existing machine configuration
	I0613 13:00:26.557707   38897 fix.go:54] fixHost starting: 
	I0613 13:00:26.557955   38897 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-690000 --format={{.State.Status}}
	I0613 13:00:26.606729   38897 fix.go:102] recreateIfNeeded on default-k8s-diff-port-690000: state=Stopped err=<nil>
	W0613 13:00:26.606761   38897 fix.go:128] unexpected machine state, will restart: <nil>
	I0613 13:00:26.628761   38897 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-690000" ...
	I0613 13:00:26.650547   38897 cli_runner.go:164] Run: docker start default-k8s-diff-port-690000
	I0613 13:00:26.915447   38897 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-690000 --format={{.State.Status}}
	I0613 13:00:26.967969   38897 kic.go:426] container "default-k8s-diff-port-690000" state is running.
	I0613 13:00:26.968584   38897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-690000
	I0613 13:00:27.025729   38897 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/config.json ...
	I0613 13:00:27.026144   38897 machine.go:88] provisioning docker machine ...
	I0613 13:00:27.026169   38897 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-690000"
	I0613 13:00:27.026243   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:27.087938   38897 main.go:141] libmachine: Using SSH client type: native
	I0613 13:00:27.088480   38897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60377 <nil> <nil>}
	I0613 13:00:27.088500   38897 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-690000 && echo "default-k8s-diff-port-690000" | sudo tee /etc/hostname
	I0613 13:00:27.089871   38897 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0613 13:00:30.224562   38897 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-690000
	
	I0613 13:00:30.224668   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:30.274342   38897 main.go:141] libmachine: Using SSH client type: native
	I0613 13:00:30.274674   38897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60377 <nil> <nil>}
	I0613 13:00:30.274690   38897 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-690000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-690000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-690000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 13:00:30.394593   38897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
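The SSH script just above keeps /etc/hosts in step with the hostname it set: if no line already names the machine, it rewrites an existing 127.0.1.1 entry in place or appends one, so repeated restarts stay idempotent. A standalone check of the result, as a sketch (not executed in this run):

    docker exec default-k8s-diff-port-690000 grep default-k8s-diff-port-690000 /etc/hosts
    # expect a 127.0.1.1 line carrying the profile hostname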
	I0613 13:00:30.394622   38897 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 13:00:30.394643   38897 ubuntu.go:177] setting up certificates
	I0613 13:00:30.394657   38897 provision.go:83] configureAuth start
	I0613 13:00:30.394744   38897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-690000
	I0613 13:00:30.444140   38897 provision.go:138] copyHostCerts
	I0613 13:00:30.444236   38897 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 13:00:30.444246   38897 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 13:00:30.444373   38897 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 13:00:30.444593   38897 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 13:00:30.444600   38897 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 13:00:30.444662   38897 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 13:00:30.444821   38897 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 13:00:30.444827   38897 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 13:00:30.444887   38897 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 13:00:30.445012   38897 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-690000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-690000]
	I0613 13:00:30.804313   38897 provision.go:172] copyRemoteCerts
	I0613 13:00:30.804376   38897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 13:00:30.804430   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:30.877321   38897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60377 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/default-k8s-diff-port-690000/id_rsa Username:docker}
	I0613 13:00:30.965797   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 13:00:30.987456   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0613 13:00:31.009248   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0613 13:00:31.031265   38897 provision.go:86] duration metric: configureAuth took 636.576018ms
	I0613 13:00:31.031279   38897 ubuntu.go:193] setting minikube options for container-runtime
	I0613 13:00:31.031424   38897 config.go:182] Loaded profile config "default-k8s-diff-port-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 13:00:31.031494   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.082042   38897 main.go:141] libmachine: Using SSH client type: native
	I0613 13:00:31.082388   38897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60377 <nil> <nil>}
	I0613 13:00:31.082399   38897 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 13:00:31.201478   38897 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 13:00:31.201502   38897 ubuntu.go:71] root file system type: overlay
	I0613 13:00:31.201595   38897 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 13:00:31.201703   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.251314   38897 main.go:141] libmachine: Using SSH client type: native
	I0613 13:00:31.251661   38897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60377 <nil> <nil>}
	I0613 13:00:31.251713   38897 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 13:00:31.380127   38897 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 13:00:31.380254   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.430886   38897 main.go:141] libmachine: Using SSH client type: native
	I0613 13:00:31.431239   38897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60377 <nil> <nil>}
	I0613 13:00:31.431255   38897 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 13:00:31.555446   38897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 13:00:31.555464   38897 machine.go:91] provisioned docker machine in 4.529181299s
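The final provisioning step above is a compare-then-swap: the freshly rendered docker.service.new only replaces the live unit, and only triggers daemon-reload plus a docker restart, when diff -u reports a difference, so an unchanged restart skips the daemon bounce. To confirm afterwards which unit systemd actually resolved, a hypothetical follow-up:

    systemctl cat docker.service                  # the unit file plus any drop-ins
    systemctl show docker --property=ExecStart    # the single surviving ExecStart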
	I0613 13:00:31.555474   38897 start.go:300] post-start starting for "default-k8s-diff-port-690000" (driver="docker")
	I0613 13:00:31.555484   38897 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 13:00:31.555558   38897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 13:00:31.555612   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.605733   38897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60377 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/default-k8s-diff-port-690000/id_rsa Username:docker}
	I0613 13:00:31.694821   38897 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 13:00:31.698937   38897 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 13:00:31.698966   38897 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 13:00:31.698974   38897 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 13:00:31.698981   38897 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 13:00:31.698989   38897 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 13:00:31.699071   38897 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 13:00:31.699223   38897 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 13:00:31.699384   38897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 13:00:31.707988   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 13:00:31.729880   38897 start.go:303] post-start completed in 174.392096ms
	I0613 13:00:31.729983   38897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 13:00:31.730041   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.779818   38897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60377 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/default-k8s-diff-port-690000/id_rsa Username:docker}
	I0613 13:00:31.865238   38897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 13:00:31.870552   38897 fix.go:56] fixHost completed within 5.312690998s
	I0613 13:00:31.870566   38897 start.go:83] releasing machines lock for "default-k8s-diff-port-690000", held for 5.312741484s
	I0613 13:00:31.870675   38897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-690000
	I0613 13:00:31.920397   38897 ssh_runner.go:195] Run: cat /version.json
	I0613 13:00:31.920413   38897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 13:00:31.920476   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.920496   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:31.974026   38897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60377 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/default-k8s-diff-port-690000/id_rsa Username:docker}
	I0613 13:00:31.974794   38897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60377 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/default-k8s-diff-port-690000/id_rsa Username:docker}
	I0613 13:00:32.162896   38897 ssh_runner.go:195] Run: systemctl --version
	I0613 13:00:32.168470   38897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0613 13:00:32.174224   38897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0613 13:00:32.192461   38897 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0613 13:00:32.192532   38897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0613 13:00:32.201560   38897 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0613 13:00:32.201574   38897 start.go:464] detecting cgroup driver to use...
	I0613 13:00:32.201595   38897 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 13:00:32.201714   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 13:00:32.217363   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0613 13:00:32.227957   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 13:00:32.238192   38897 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 13:00:32.238263   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 13:00:32.248433   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 13:00:32.258470   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 13:00:32.268349   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 13:00:32.278592   38897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 13:00:32.288127   38897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0613 13:00:32.298300   38897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 13:00:32.307010   38897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 13:00:32.315711   38897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:00:32.392773   38897 ssh_runner.go:195] Run: sudo systemctl restart containerd
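The sed series above rewrites /etc/containerd/config.toml so containerd agrees with the detected cgroupfs driver (SystemdCgroup = false), uses the runc v2 shim, and reads CNI configs from /etc/cni/net.d; the restart makes the edits take effect. A quick post-check, assuming containerd's stock CRI config layout:

    grep -n SystemdCgroup /etc/containerd/config.toml
    # expected after the edit: SystemdCgroup = false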
	I0613 13:00:32.467547   38897 start.go:464] detecting cgroup driver to use...
	I0613 13:00:32.467572   38897 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 13:00:32.467655   38897 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 13:00:32.479501   38897 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 13:00:32.479575   38897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 13:00:32.493044   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 13:00:32.510269   38897 ssh_runner.go:195] Run: which cri-dockerd
	I0613 13:00:32.515117   38897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 13:00:32.524848   38897 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 13:00:32.546294   38897 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 13:00:32.690765   38897 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 13:00:32.758308   38897 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 13:00:32.758328   38897 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 13:00:32.796350   38897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:00:32.879244   38897 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 13:00:33.170190   38897 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 13:00:33.244653   38897 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0613 13:00:33.312519   38897 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 13:00:33.384830   38897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:00:33.459203   38897 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0613 13:00:33.479246   38897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:00:33.545985   38897 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0613 13:00:33.641295   38897 start.go:511] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0613 13:00:33.641412   38897 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0613 13:00:33.646498   38897 start.go:532] Will wait 60s for crictl version
	I0613 13:00:33.646565   38897 ssh_runner.go:195] Run: which crictl
	I0613 13:00:33.651474   38897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0613 13:00:33.701461   38897 start.go:548] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1
	I0613 13:00:33.701549   38897 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 13:00:33.728959   38897 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 13:00:33.780440   38897 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0613 13:00:33.780618   38897 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-690000 dig +short host.docker.internal
	I0613 13:00:33.893271   38897 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 13:00:33.893395   38897 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 13:00:33.898626   38897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
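The grep/echo/cp idiom above, rather than sed -i, matters inside a container: /etc/hosts is a bind mount, and sed -i's rename of a temp file over it fails with EBUSY, while cp rewrites the existing file in place. The same pattern for any pinned hosts entry (values illustrative):

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.65.254\thost.minikube.internal'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts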
	I0613 13:00:33.910209   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:33.960423   38897 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 13:00:33.960510   38897 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 13:00:33.981709   38897 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0613 13:00:33.981736   38897 docker.go:566] Images already preloaded, skipping extraction
	I0613 13:00:33.981818   38897 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 13:00:34.002676   38897 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0613 13:00:34.002700   38897 cache_images.go:84] Images are preloaded, skipping loading
	I0613 13:00:34.002796   38897 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 13:00:34.052738   38897 cni.go:84] Creating CNI manager for ""
	I0613 13:00:34.052756   38897 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 13:00:34.052774   38897 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
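With the docker driver and docker runtime on v1.24+, minikube provisions a plain bridge CNI rather than a full network addon. A minimal bridge conflist of the kind dropped into /etc/cni/net.d (illustrative sketch; the file name and exact fields are assumptions, with the subnet taken from the pod CIDR above):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [{
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      }]
    }
    EOF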
	I0613 13:00:34.052791   38897 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-690000 NodeName:default-k8s-diff-port-690000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0613 13:00:34.052910   38897 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-690000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
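The block above is a single multi-document kubeadm config: InitConfiguration (node and API endpoint), ClusterConfiguration (control-plane layout), KubeletConfiguration and KubeProxyConfiguration, separated by ---. It is staged as kubeadm.yaml.new and diffed against the running copy later in this run; on a fresh init it would be consumed roughly like this (hypothetical invocation, flags assumed):

    sudo /var/lib/minikube/binaries/v1.27.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all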
	
	I0613 13:00:34.052985   38897 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-690000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
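The kubelet unit above relies on the same systemd override trick as the docker unit: an empty ExecStart= first clears the inherited command, then the full command line pins the CRI socket, hostname override and node IP to this profile. The merged view, as a hypothetical check:

    systemctl cat kubelet
    # kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in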
	I0613 13:00:34.053051   38897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0613 13:00:34.062379   38897 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 13:00:34.062437   38897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 13:00:34.071180   38897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0613 13:00:34.087597   38897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 13:00:34.103931   38897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I0613 13:00:34.120883   38897 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0613 13:00:34.125199   38897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 13:00:34.136398   38897 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000 for IP: 192.168.76.2
	I0613 13:00:34.136415   38897 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 13:00:34.136590   38897 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 13:00:34.136646   38897 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 13:00:34.136758   38897 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/client.key
	I0613 13:00:34.136824   38897 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/apiserver.key.31bdca25
	I0613 13:00:34.136878   38897 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/proxy-client.key
	I0613 13:00:34.137089   38897 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 13:00:34.137134   38897 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 13:00:34.137146   38897 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 13:00:34.137180   38897 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 13:00:34.137214   38897 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 13:00:34.137245   38897 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 13:00:34.137316   38897 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 13:00:34.137855   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 13:00:34.160322   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0613 13:00:34.182261   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 13:00:34.204060   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/default-k8s-diff-port-690000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0613 13:00:34.226042   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 13:00:34.248349   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 13:00:34.270388   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 13:00:34.292356   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 13:00:34.314355   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 13:00:34.337200   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 13:00:34.361296   38897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 13:00:34.385803   38897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 13:00:34.403951   38897 ssh_runner.go:195] Run: openssl version
	I0613 13:00:34.410583   38897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 13:00:34.421236   38897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 13:00:34.425647   38897 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 13:00:34.425718   38897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 13:00:34.432852   38897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
	I0613 13:00:34.442237   38897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 13:00:34.451851   38897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 13:00:34.456316   38897 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 13:00:34.456378   38897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 13:00:34.463626   38897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 13:00:34.473324   38897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 13:00:34.483205   38897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 13:00:34.487633   38897 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 13:00:34.487677   38897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 13:00:34.494808   38897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
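Each certificate install above ends with OpenSSL's hashed-directory convention: verifiers look up CAs in /etc/ssl/certs by a file named after the subject hash, so the hash is computed and <hash>.0 is symlinked at the PEM. The same two steps by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"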
	I0613 13:00:34.504288   38897 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 13:00:34.508655   38897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0613 13:00:34.515835   38897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0613 13:00:34.523057   38897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0613 13:00:34.530299   38897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0613 13:00:34.537456   38897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0613 13:00:34.544777   38897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
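The -checkend 86400 probes above ask whether each certificate is still valid 24 hours from now: exit status 0 means it will not expire inside that window, non-zero means it will, which is what would trigger regeneration. Standalone form:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "ok for 24h" || echo "expires within 24h"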
	I0613 13:00:34.551780   38897 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 13:00:34.551902   38897 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 13:00:34.573554   38897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 13:00:34.582839   38897 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0613 13:00:34.582855   38897 kubeadm.go:636] restartCluster start
	I0613 13:00:34.582908   38897 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0613 13:00:34.591512   38897 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:00:34.591641   38897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-690000
	I0613 13:00:34.643129   38897 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-690000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 13:00:34.643317   38897 kubeconfig.go:146] "default-k8s-diff-port-690000" context is missing from /Users/jenkins/minikube-integration/15003-20351/kubeconfig - will repair!
	I0613 13:00:34.643639   38897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 13:00:34.645097   38897 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0613 13:00:34.654491   38897 api_server.go:166] Checking apiserver status ...
	I0613 13:00:34.654565   38897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:00:34.664782   38897 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[the same status check was retried every ~500ms and failed identically 19 more times, from 13:00:35.166808 through 13:00:44.176868]
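	[editor's note: the status checks above are a fixed-interval poll: the same pgrep probe is re-run roughly every 500ms until a context deadline of about 10 seconds fires, producing the "context deadline exceeded" recorded just below. A Go sketch of that retry shape; apiServerPid is a hypothetical stand-in for minikube's remote pgrep:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerPid is a hypothetical stand-in for minikube's remote probe;
	// it fails while no kube-apiserver process matches.
	func apiServerPid(ctx context.Context) error {
		return exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	}

	func main() {
		// ~10s deadline, inferred from the 13:00:34.654 -> 13:00:44.655 window.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		tick := time.NewTicker(500 * time.Millisecond) // matches the retry spacing
		defer tick.Stop()

		for {
			if apiServerPid(ctx) == nil {
				fmt.Println("apiserver process found")
				return
			}
			select {
			case <-ctx.Done():
				// The log's outcome: "apiserver error: context deadline exceeded".
				fmt.Println("apiserver never appeared:", ctx.Err())
				return
			case <-tick.C:
				// next attempt
			}
		}
	}
	end of note]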
	I0613 13:00:44.655469   38897 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0613 13:00:44.655529   38897 kubeadm.go:1128] stopping kube-system containers ...
	I0613 13:00:44.655778   38897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 13:00:44.680397   38897 docker.go:462] Stopping containers: [886e9af0ec01 b002248a12af fa72d3874576 9b6a8d8de016 1c6321dcf0b8 dcf49a958365 45d4fe489b02 fff07d2936ac a85007bf1e6a 487daee31058 861d62eb865e 4586aa0f012f 2669c662c1d1 9657e3b8c150 70621b5de898 66a0b308350c]
	I0613 13:00:44.680487   38897 ssh_runner.go:195] Run: docker stop 886e9af0ec01 b002248a12af fa72d3874576 9b6a8d8de016 1c6321dcf0b8 dcf49a958365 45d4fe489b02 fff07d2936ac a85007bf1e6a 487daee31058 861d62eb865e 4586aa0f012f 2669c662c1d1 9657e3b8c150 70621b5de898 66a0b308350c
	I0613 13:00:44.702385   38897 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0613 13:00:44.714664   38897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 13:00:44.723728   38897 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 13 19:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 13 19:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jun 13 19:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 13 19:59 /etc/kubernetes/scheduler.conf
	
	I0613 13:00:44.723793   38897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0613 13:00:44.732988   38897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0613 13:00:44.742072   38897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0613 13:00:44.750843   38897 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:00:44.750902   38897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0613 13:00:44.759670   38897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0613 13:00:44.768437   38897 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:00:44.768500   38897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
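	[editor's note: the grep/rm pairs above encode a simple staleness rule: if a component kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint (https://control-plane.minikube.internal:8444 here), it is removed so the kubeadm init phases below regenerate it. A compact Go sketch of that rule, with paths and endpoint taken from the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes conf when it does not mention endpoint,
	// mirroring the log's grep-then-rm sequence.
	func removeIfStale(conf, endpoint string) error {
		data, err := os.ReadFile(conf)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // endpoint present: keep the file
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
		return os.Remove(conf)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(conf, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
	end of note]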
	I0613 13:00:44.777004   38897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 13:00:44.785978   38897 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0613 13:00:44.785995   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:00:44.836382   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:00:45.591420   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:00:45.728143   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:00:45.781329   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:00:45.893340   38897 api_server.go:52] waiting for apiserver process to appear ...
	I0613 13:00:45.893419   38897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:00:46.404977   38897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:00:46.905583   38897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:00:46.919846   38897 api_server.go:72] duration metric: took 1.02647864s to wait for apiserver process to appear ...
	I0613 13:00:46.919862   38897 api_server.go:88] waiting for apiserver healthz status ...
	I0613 13:00:46.919879   38897 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60376/healthz ...
	I0613 13:00:46.920885   38897 api_server.go:269] stopped: https://127.0.0.1:60376/healthz: Get "https://127.0.0.1:60376/healthz": EOF
	I0613 13:00:47.420964   38897 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60376/healthz ...
	I0613 13:00:49.163793   38897 api_server.go:279] https://127.0.0.1:60376/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0613 13:00:49.163825   38897 api_server.go:103] status: https://127.0.0.1:60376/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 13:00:49.163838   38897 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60376/healthz ...
	I0613 13:00:49.200162   38897 api_server.go:279] https://127.0.0.1:60376/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0613 13:00:49.200183   38897 api_server.go:103] status: https://127.0.0.1:60376/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 13:00:49.421566   38897 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60376/healthz ...
	I0613 13:00:49.427486   38897 api_server.go:279] https://127.0.0.1:60376/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0613 13:00:49.427503   38897 api_server.go:103] status: https://127.0.0.1:60376/healthz returned error 500:
	[identical 500 healthz body omitted; same 31 lines as above]
	I0613 13:00:49.921284   38897 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60376/healthz ...
	I0613 13:00:49.927998   38897 api_server.go:279] https://127.0.0.1:60376/healthz returned 500:
	[identical 500 healthz body omitted; same 31 lines as above]
	W0613 13:00:49.928017   38897 api_server.go:103] status: https://127.0.0.1:60376/healthz returned error 500:
	[identical 500 healthz body omitted; same 31 lines as above]
	I0613 13:00:50.421680   38897 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60376/healthz ...
	I0613 13:00:50.428864   38897 api_server.go:279] https://127.0.0.1:60376/healthz returned 200:
	ok
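	[editor's note: this healthz progression is the normal startup sequence for a restarted apiserver: first EOF while nothing is listening, then 403 because the anonymous probe is not yet authorized, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing, and finally 200 "ok". A Go sketch of such a localhost probe; the port comes from the log, and skipping TLS verification mirrors an anonymous loopback check:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Anonymous loopback probe, so the apiserver's serving cert is
		// deliberately not verified here.
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://127.0.0.1:60376/healthz") // port from the log
		if err != nil {
			fmt.Println("stopped:", err) // e.g. EOF before the socket is listening
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// 403 = anonymous access not yet authorized, 500 = post-start hooks
		// still failing, 200 with body "ok" = healthy.
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	}
	end of note]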
	I0613 13:00:50.437057   38897 api_server.go:141] control plane version: v1.27.2
	I0613 13:00:50.437076   38897 api_server.go:131] duration metric: took 3.517108007s to wait for apiserver health ...
	I0613 13:00:50.437084   38897 cni.go:84] Creating CNI manager for ""
	I0613 13:00:50.437093   38897 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 13:00:50.458686   38897 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0613 13:00:50.479819   38897 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0613 13:00:50.491192   38897 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
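	[editor's note: the scp line copies an in-memory bridge CNI config into /etc/cni/net.d/1-k8s.conflist. The actual 457-byte payload is not shown in the log; the Go sketch below only marshals a generic bridge conflist of the shape such files take, and every field value in it is illustrative rather than minikube's real configuration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Illustrative bridge conflist; minikube's real 1-k8s.conflist may
		// differ in names, versions, and subnet.
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out)) // what would land in /etc/cni/net.d/1-k8s.conflist
	}
	end of note]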
	I0613 13:00:50.508150   38897 system_pods.go:43] waiting for kube-system pods to appear ...
	I0613 13:00:50.515860   38897 system_pods.go:59] 8 kube-system pods found
	I0613 13:00:50.515875   38897 system_pods.go:61] "coredns-5d78c9869d-wkkd8" [e257cbf1-0f1c-4ea2-b32a-9fea896a4b0d] Running
	I0613 13:00:50.515879   38897 system_pods.go:61] "etcd-default-k8s-diff-port-690000" [63e9f084-d754-4b1b-9ecb-0cea91347465] Running
	I0613 13:00:50.515883   38897 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-690000" [1244a169-c761-424a-938e-9b0f6dba5387] Running
	I0613 13:00:50.515886   38897 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-690000" [42387fbf-3c46-4498-9ad6-c29e35393004] Running
	I0613 13:00:50.515890   38897 system_pods.go:61] "kube-proxy-9vjzt" [9781c204-7995-4c01-befc-ff5aed24846d] Running
	I0613 13:00:50.515895   38897 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-690000" [b9e7eff2-7c7d-4bd6-b0c9-1110fea814f3] Running
	I0613 13:00:50.515900   38897 system_pods.go:61] "metrics-server-74d5c6b9c-rf52h" [afb0f735-b6d5-41fa-8620-e9099853a291] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0613 13:00:50.515908   38897 system_pods.go:61] "storage-provisioner" [ba91954f-79a7-4ca6-8550-72f360695d1c] Running
	I0613 13:00:50.515913   38897 system_pods.go:74] duration metric: took 7.747906ms to wait for pod list to return data ...
	I0613 13:00:50.515917   38897 node_conditions.go:102] verifying NodePressure condition ...
	I0613 13:00:50.518799   38897 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0613 13:00:50.518814   38897 node_conditions.go:123] node cpu capacity is 6
	I0613 13:00:50.518824   38897 node_conditions.go:105] duration metric: took 2.902627ms to run NodePressure ...
	I0613 13:00:50.518838   38897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:00:50.651108   38897 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0613 13:00:50.655303   38897 retry.go:31] will retry after 125.804466ms: kubelet not initialised
	I0613 13:00:50.785750   38897 retry.go:31] will retry after 389.476378ms: kubelet not initialised
	I0613 13:00:51.181229   38897 kubeadm.go:787] kubelet initialised
	I0613 13:00:51.181242   38897 kubeadm.go:788] duration metric: took 530.103055ms waiting for restarted kubelet to initialise ...
	I0613 13:00:51.181251   38897 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
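	[editor's note: the pod_ready loop that follows polls each system-critical pod until its Ready condition reports True. A hedged client-go sketch of that readiness predicate; the kubeconfig path is a placeholder, and the pod name is taken from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// the predicate pod_ready.go keeps re-testing below.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-5d78c9869d-wkkd8", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}
	end of note]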
	I0613 13:00:51.187749   38897 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wkkd8" in "kube-system" namespace to be "Ready" ...
	I0613 13:00:53.203536   38897 pod_ready.go:102] pod "coredns-5d78c9869d-wkkd8" in "kube-system" namespace has status "Ready":"False"
	I0613 13:00:55.205455   38897 pod_ready.go:102] pod "coredns-5d78c9869d-wkkd8" in "kube-system" namespace has status "Ready":"False"
	I0613 13:00:57.201964   38897 pod_ready.go:92] pod "coredns-5d78c9869d-wkkd8" in "kube-system" namespace has status "Ready":"True"
	I0613 13:00:57.201975   38897 pod_ready.go:81] duration metric: took 6.014039536s waiting for pod "coredns-5d78c9869d-wkkd8" in "kube-system" namespace to be "Ready" ...
	I0613 13:00:57.201986   38897 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:00:59.213488   38897 pod_ready.go:102] pod "etcd-default-k8s-diff-port-690000" in "kube-system" namespace has status "Ready":"False"
	I0613 13:01:01.215657   38897 pod_ready.go:92] pod "etcd-default-k8s-diff-port-690000" in "kube-system" namespace has status "Ready":"True"
	I0613 13:01:01.215669   38897 pod_ready.go:81] duration metric: took 4.01356332s waiting for pod "etcd-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:01.215676   38897 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:01.221197   38897 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-690000" in "kube-system" namespace has status "Ready":"True"
	I0613 13:01:01.221207   38897 pod_ready.go:81] duration metric: took 5.527047ms waiting for pod "kube-apiserver-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:01.221217   38897 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:03.234710   38897 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-690000" in "kube-system" namespace has status "Ready":"False"
	I0613 13:01:04.734240   38897 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-690000" in "kube-system" namespace has status "Ready":"True"
	I0613 13:01:04.734250   38897 pod_ready.go:81] duration metric: took 3.512928726s waiting for pod "kube-controller-manager-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:04.734257   38897 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9vjzt" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:04.739885   38897 pod_ready.go:92] pod "kube-proxy-9vjzt" in "kube-system" namespace has status "Ready":"True"
	I0613 13:01:04.739898   38897 pod_ready.go:81] duration metric: took 5.633578ms waiting for pod "kube-proxy-9vjzt" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:04.739905   38897 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:04.744970   38897 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-690000" in "kube-system" namespace has status "Ready":"True"
	I0613 13:01:04.744979   38897 pod_ready.go:81] duration metric: took 5.06758ms waiting for pod "kube-scheduler-default-k8s-diff-port-690000" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:04.744985   38897 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-rf52h" in "kube-system" namespace to be "Ready" ...
	I0613 13:01:06.758917   38897 pod_ready.go:102] pod "metrics-server-74d5c6b9c-rf52h" in "kube-system" namespace has status "Ready":"False"
	[the same "Ready":"False" check was logged 35 more times, roughly every 2.5s, through 13:02:24.763716]
	
	* 
	* ==> Docker <==
	* Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.767650479Z" level=info msg="Loading containers: start."
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.856306269Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.893875182Z" level=info msg="Loading containers: done."
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.902742638Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.902801729Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.932004572Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.932050353Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:45:03 old-k8s-version-554000 systemd[1]: Started Docker Application Container Engine.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.749280155Z" level=info msg="Processing signal 'terminated'"
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.750223589Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.750398147Z" level=info msg="Daemon shutdown complete"
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: docker.service: Deactivated successfully.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Starting Docker Application Container Engine...
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:11.806954552Z" level=info msg="Starting up"
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:11.952568721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.130460518Z" level=info msg="Loading containers: start."
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.220890932Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.259079227Z" level=info msg="Loading containers: done."
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.268343604Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.268406325Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.296356733Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.296512735Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:45:12 old-k8s-version-554000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-06-13T20:02:27Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:02:27 up  3:01,  0 users,  load average: 0.66, 0.82, 0.98
	Linux old-k8s-version-554000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jun 13 20:02:26 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 13 20:02:26 old-k8s-version-554000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 880.
	Jun 13 20:02:26 old-k8s-version-554000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 13 20:02:26 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: I0613 20:02:26.918733   26090 server.go:410] Version: v1.16.0
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: I0613 20:02:26.919026   26090 plugins.go:100] No cloud provider specified.
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: I0613 20:02:26.919037   26090 server.go:773] Client rotation is on, will bootstrap in background
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: I0613 20:02:26.921424   26090 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: W0613 20:02:26.922141   26090 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: W0613 20:02:26.922211   26090 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 13 20:02:26 old-k8s-version-554000 kubelet[26090]: F0613 20:02:26.922238   26090 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 13 20:02:26 old-k8s-version-554000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 13 20:02:26 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 13 20:02:27 old-k8s-version-554000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 881.
	Jun 13 20:02:27 old-k8s-version-554000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 13 20:02:27 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: I0613 20:02:27.666743   26169 server.go:410] Version: v1.16.0
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: I0613 20:02:27.666970   26169 plugins.go:100] No cloud provider specified.
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: I0613 20:02:27.666982   26169 server.go:773] Client rotation is on, will bootstrap in background
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: I0613 20:02:27.668796   26169 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: W0613 20:02:27.669630   26169 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: W0613 20:02:27.669702   26169 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 13 20:02:27 old-k8s-version-554000 kubelet[26169]: F0613 20:02:27.669731   26169 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 13 20:02:27 old-k8s-version-554000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 13 20:02:27 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
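	[editor's note: the kubelet crash loop above ("failed to run Kubelet: mountpoint for cpu not found") is characteristic of a v1.16 kubelet, which expects a cgroup v1 cpu controller mount, running on a host such as this 5.15 linuxkit kernel that likely exposes only the unified cgroup v2 hierarchy. A small Go check in the same spirit, scanning /proc/mounts for a v1 cpu controller:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		found := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			// cgroup v1 lines look like:
			//   cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,...,cpu,cpuacct 0 0
			// (coarse match: also catches cpuacct/cpuset; fine for a sketch)
			if len(fields) >= 4 && fields[2] == "cgroup" &&
				strings.Contains(fields[3], "cpu") {
				found = true
				fmt.Println("cgroup v1 cpu controller mounted at", fields[1])
			}
		}
		if !found {
			// On a cgroup v2 host only a unified "cgroup2" mount exists,
			// which a v1.16 kubelet cannot use.
			fmt.Println("no cgroup v1 cpu mountpoint found")
		}
	}
	end of note]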
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0613 13:02:27.809309   39019 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (357.433561ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-554000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (417.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous warning repeated 2 more times]
E0613 13:03:02.763965   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 13:03:05.494632   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:03:15.841894   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous warning repeated 2 more times]
E0613 13:03:48.498303   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous warning repeated 3 more times]
E0613 13:04:27.517811   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kubenet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous warning repeated 3 more times]
E0613 13:05:05.975576   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:05:19.690502   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous warning repeated 7 more times]
E0613 13:06:31.853130   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 13:06:39.013497   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:06:42.441274   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[previous warning repeated 7 more times]
E0613 13:08:02.774253   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:08:15.849934   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0613 13:08:48.508734   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59656/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (353.755753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-554000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.783µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
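[Editor's note] The failure above is the standard wait pattern in this suite: poll the apiserver for pods matching "k8s-app=kubernetes-dashboard" until a 9m0s context deadline expires, tolerating transient errors (the EOF warnings) along the way. The sketch below reproduces that pattern under stated assumptions -- it shells out to kubectl rather than using the Kubernetes Go client the real helper evidently uses (note the "client rate limiter Wait" wording above), and the profile/namespace/selector strings are simply copied from this log; it is not the test's actual implementation.

	// A minimal sketch of a poll-until-deadline pod wait; assumes kubectl is on PATH.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForPods(ctx context.Context, kubeContext, ns, selector string) error {
		tick := time.NewTicker(5 * time.Second)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				// The "failed to start within 9m0s: context deadline exceeded" path.
				return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
			case <-tick.C:
				out, err := exec.CommandContext(ctx, "kubectl",
					"--context", kubeContext, "-n", ns, "get", "pods",
					"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
				if err != nil {
					continue // apiserver unreachable (the EOFs above); keep retrying
				}
				for _, phase := range strings.Fields(string(out)) {
					if phase == "Running" {
						return nil
					}
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		fmt.Println(waitForPods(ctx, "old-k8s-version-554000",
			"kubernetes-dashboard", "k8s-app=kubernetes-dashboard"))
	}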
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-554000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-554000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783",
	        "Created": "2023-06-13T19:39:30.478126252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-13T19:44:58.001656032Z",
	            "FinishedAt": "2023-06-13T19:44:55.288857813Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hostname",
	        "HostsPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/hosts",
	        "LogPath": "/var/lib/docker/containers/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783/48c9404a6c77e63d13b8527df2e766f7494743f6ed8ef4ef7d353a3c849b5783-json.log",
	        "Name": "/old-k8s-version-554000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-554000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-554000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df-init/diff:/var/lib/docker/overlay2/74bda2c4a8f6b504659e5538348208ad822d571d6fa10595261cb1058929d560/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de878bcaa749eafa8ac71244ed4b4d7d8d82a97c174c77cf81b9fcb77569f3df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-554000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-554000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-554000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-554000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91eea7ce06a736bcebc6e16ec019e29531e35edc0efa8dd27d1bdcf8954dcd78",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59652"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59653"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59654"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59655"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59656"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/91eea7ce06a7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-554000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "48c9404a6c77",
	                        "old-k8s-version-554000"
	                    ],
	                    "NetworkID": "a540b9a7399007dd984f1d1dd3d2c1d41a530fe412b07cd734dab04e1b578830",
	                    "EndpointID": "522ec1a96f4f34c0dab581c1f3f60535ac037d9119dddf68b5c26fab103cb29c",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
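[Editor's note] The inspect output above also explains the endpoint in the earlier EOF warnings: the container publishes 8443/tcp (the apiserver port) on 127.0.0.1:59656, exactly the address the pod-list calls were failing against while the apiserver was stopped. Below is a minimal sketch of recovering that mapping programmatically, using the same Go-template index expression this log later shows minikube itself using for 22/tcp; it assumes docker is on PATH and the container from this run still exists.

	// Look up the host port that old-k8s-version-554000 publishes for 8443/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"old-k8s-version-554000").Output()
		if err != nil {
			panic(err) // container removed or docker unreachable
		}
		// Prints 59656 for the run above -- the 127.0.0.1 endpoint that returned EOF.
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}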
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (355.846092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
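[Editor's note] "exit status 2 (may be ok)" reflects how the harness treats minikube status: the command exits non-zero when a component is down, but its stdout still carries the state string ("Running"/"Stopped") that the log records. A minimal sketch of that tolerant invocation, with the binary path, profile, and flags copied from this log:

	// Run `minikube status` and keep going on a non-zero exit, since the
	// exit code here signals component state rather than a hard failure.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-554000",
			"-n", "old-k8s-version-554000").Output()
		var exitErr *exec.ExitError
		if err != nil && !errors.As(err, &exitErr) {
			panic(err) // binary missing or not executable
		}
		state := strings.TrimSpace(string(out)) // stdout is captured even on ExitError
		if err != nil {
			fmt.Printf("status error: %v (may be ok); host state=%q\n", err, state)
			return
		}
		fmt.Printf("host state=%q\n", state)
	}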
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-554000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-554000 logs -n 25: (1.398255773s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	| delete  | -p embed-certs-550000                                  | embed-certs-550000           | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	| delete  | -p                                                     | disable-driver-mounts-899000 | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 12:59 PDT |
	|         | disable-driver-mounts-899000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 12:59 PDT | 13 Jun 23 13:00 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690000  | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:00 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:00 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690000       | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:00 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:00 PDT | 13 Jun 23 13:05 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:05 PDT | 13 Jun 23 13:05 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:05 PDT | 13 Jun 23 13:05 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:05 PDT | 13 Jun 23 13:05 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:06 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-690000 | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:06 PDT |
	|         | default-k8s-diff-port-690000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-802000 --memory=2200 --alsologtostderr   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:06 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.2          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-802000             | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:06 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-802000                                   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:06 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-802000                  | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:06 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-802000 --memory=2200 --alsologtostderr   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:06 PDT | 13 Jun 23 13:07 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.2          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-802000 sudo                              | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:07 PDT | 13 Jun 23 13:07 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-802000                                   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:07 PDT | 13 Jun 23 13:07 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-802000                                   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:07 PDT | 13 Jun 23 13:07 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-802000                                   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:07 PDT | 13 Jun 23 13:07 PDT |
	| delete  | -p newest-cni-802000                                   | newest-cni-802000            | jenkins | v1.30.1 | 13 Jun 23 13:07 PDT | 13 Jun 23 13:07 PDT |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 13:06:50
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 13:06:50.137738   39397 out.go:296] Setting OutFile to fd 1 ...
	I0613 13:06:50.137923   39397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 13:06:50.137928   39397 out.go:309] Setting ErrFile to fd 2...
	I0613 13:06:50.137932   39397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 13:06:50.138044   39397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 13:06:50.139437   39397 out.go:303] Setting JSON to false
	I0613 13:06:50.158619   39397 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11181,"bootTime":1686675629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 13:06:50.158709   39397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 13:06:50.180718   39397 out.go:177] * [newest-cni-802000] minikube v1.30.1 on Darwin 13.4
	I0613 13:06:50.222633   39397 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 13:06:50.222619   39397 notify.go:220] Checking for updates...
	I0613 13:06:50.243521   39397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 13:06:50.264859   39397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 13:06:50.286796   39397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 13:06:50.329445   39397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 13:06:50.350745   39397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 13:06:50.372093   39397 config.go:182] Loaded profile config "newest-cni-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 13:06:50.372910   39397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 13:06:50.429133   39397 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 13:06:50.429255   39397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 13:06:50.522721   39397 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 20:06:50.511872423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 13:06:50.565001   39397 out.go:177] * Using the docker driver based on existing profile
	I0613 13:06:50.586017   39397 start.go:297] selected driver: docker
	I0613 13:06:50.586037   39397 start.go:884] validating driver "docker" against &{Name:newest-cni-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 13:06:50.586164   39397 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 13:06:50.590173   39397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 13:06:50.703038   39397 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 20:06:50.679245346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 13:06:50.703263   39397 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0613 13:06:50.703285   39397 cni.go:84] Creating CNI manager for ""
	I0613 13:06:50.703298   39397 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 13:06:50.703309   39397 start_flags.go:319] config:
	{Name:newest-cni-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 13:06:50.746625   39397 out.go:177] * Starting control plane node newest-cni-802000 in cluster newest-cni-802000
	I0613 13:06:50.767758   39397 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 13:06:50.809776   39397 out.go:177] * Pulling base image ...
	I0613 13:06:50.830730   39397 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 13:06:50.830736   39397 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 13:06:50.830829   39397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0613 13:06:50.830855   39397 cache.go:57] Caching tarball of preloaded images
	I0613 13:06:50.831827   39397 preload.go:174] Found /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0613 13:06:50.831915   39397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0613 13:06:50.832343   39397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/config.json ...
	I0613 13:06:50.881720   39397 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0613 13:06:50.881827   39397 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0613 13:06:50.881845   39397 cache.go:195] Successfully downloaded all kic artifacts
	I0613 13:06:50.881891   39397 start.go:365] acquiring machines lock for newest-cni-802000: {Name:mk7a6c6b9ef8b195070235723a74c3c94c465119 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0613 13:06:50.881974   39397 start.go:369] acquired machines lock for "newest-cni-802000" in 64.777µs
	I0613 13:06:50.882000   39397 start.go:96] Skipping create...Using existing machine configuration
	I0613 13:06:50.882009   39397 fix.go:54] fixHost starting: 
	I0613 13:06:50.882293   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:06:50.931167   39397 fix.go:102] recreateIfNeeded on newest-cni-802000: state=Stopped err=<nil>
	W0613 13:06:50.931197   39397 fix.go:128] unexpected machine state, will restart: <nil>
	I0613 13:06:50.952975   39397 out.go:177] * Restarting existing docker container for "newest-cni-802000" ...
	I0613 13:06:50.994632   39397 cli_runner.go:164] Run: docker start newest-cni-802000
	I0613 13:06:51.243991   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:06:51.296348   39397 kic.go:426] container "newest-cni-802000" state is running.
	I0613 13:06:51.296955   39397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-802000
	I0613 13:06:51.355305   39397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/config.json ...
	I0613 13:06:51.355749   39397 machine.go:88] provisioning docker machine ...
	I0613 13:06:51.355789   39397 ubuntu.go:169] provisioning hostname "newest-cni-802000"
	I0613 13:06:51.355881   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:51.417795   39397 main.go:141] libmachine: Using SSH client type: native
	I0613 13:06:51.418236   39397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60920 <nil> <nil>}
	I0613 13:06:51.418249   39397 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-802000 && echo "newest-cni-802000" | sudo tee /etc/hostname
	I0613 13:06:51.419480   39397 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0613 13:06:54.551445   39397 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-802000
	
	I0613 13:06:54.551539   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:54.601983   39397 main.go:141] libmachine: Using SSH client type: native
	I0613 13:06:54.602423   39397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60920 <nil> <nil>}
	I0613 13:06:54.602436   39397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-802000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-802000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-802000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0613 13:06:54.721569   39397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 13:06:54.721596   39397 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
	I0613 13:06:54.721619   39397 ubuntu.go:177] setting up certificates
	I0613 13:06:54.721627   39397 provision.go:83] configureAuth start
	I0613 13:06:54.721710   39397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-802000
	I0613 13:06:54.771051   39397 provision.go:138] copyHostCerts
	I0613 13:06:54.771161   39397 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
	I0613 13:06:54.771172   39397 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
	I0613 13:06:54.771329   39397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
	I0613 13:06:54.771539   39397 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
	I0613 13:06:54.771545   39397 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
	I0613 13:06:54.771617   39397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
	I0613 13:06:54.771784   39397 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
	I0613 13:06:54.771790   39397 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
	I0613 13:06:54.771858   39397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
	I0613 13:06:54.771992   39397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.newest-cni-802000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-802000]
	I0613 13:06:54.941605   39397 provision.go:172] copyRemoteCerts
	I0613 13:06:54.941673   39397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0613 13:06:54.941728   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:54.992711   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:06:55.081533   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0613 13:06:55.103623   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0613 13:06:55.126354   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0613 13:06:55.150973   39397 provision.go:86] duration metric: configureAuth took 429.316916ms
	I0613 13:06:55.159913   39397 ubuntu.go:193] setting minikube options for container-runtime
	I0613 13:06:55.160085   39397 config.go:182] Loaded profile config "newest-cni-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 13:06:55.160157   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:55.210542   39397 main.go:141] libmachine: Using SSH client type: native
	I0613 13:06:55.210919   39397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60920 <nil> <nil>}
	I0613 13:06:55.210933   39397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0613 13:06:55.328894   39397 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0613 13:06:55.328907   39397 ubuntu.go:71] root file system type: overlay
	I0613 13:06:55.329002   39397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0613 13:06:55.329094   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:55.378529   39397 main.go:141] libmachine: Using SSH client type: native
	I0613 13:06:55.378876   39397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60920 <nil> <nil>}
	I0613 13:06:55.378929   39397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0613 13:06:55.507759   39397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0613 13:06:55.507852   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:55.559563   39397 main.go:141] libmachine: Using SSH client type: native
	I0613 13:06:55.559902   39397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil>  [] 0s} 127.0.0.1 60920 <nil> <nil>}
	I0613 13:06:55.559918   39397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0613 13:06:55.684910   39397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0613 13:06:55.684928   39397 machine.go:91] provisioned docker machine in 4.329031723s
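
Two details of the unit update above matter: the bare "ExecStart=" line clears the command inherited from the base unit (systemd rejects a second ExecStart= for Type=notify services), and the diff-before-move means the daemon is only reloaded and restarted when the rendered file actually changed. The same clearing idiom in a minimal, hypothetical drop-in:

# Hypothetical minimal drop-in using the same ExecStart-clearing idiom:
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
  | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
sudo systemctl daemon-reload && sudo systemctl restart docker
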
	I0613 13:06:55.684938   39397 start.go:300] post-start starting for "newest-cni-802000" (driver="docker")
	I0613 13:06:55.684948   39397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0613 13:06:55.685034   39397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0613 13:06:55.685102   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:55.734687   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:06:55.824042   39397 ssh_runner.go:195] Run: cat /etc/os-release
	I0613 13:06:55.828078   39397 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0613 13:06:55.828101   39397 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0613 13:06:55.828110   39397 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0613 13:06:55.828118   39397 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0613 13:06:55.828125   39397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
	I0613 13:06:55.828211   39397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
	I0613 13:06:55.828368   39397 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
	I0613 13:06:55.828528   39397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0613 13:06:55.837462   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
	I0613 13:06:55.859568   39397 start.go:303] post-start completed in 174.616043ms
	I0613 13:06:55.859672   39397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 13:06:55.859732   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:55.910862   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:06:55.996585   39397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0613 13:06:56.001880   39397 fix.go:56] fixHost completed within 5.119725252s
	I0613 13:06:56.001895   39397 start.go:83] releasing machines lock for "newest-cni-802000", held for 5.119767406s
	I0613 13:06:56.001970   39397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-802000
	I0613 13:06:56.051974   39397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0613 13:06:56.051983   39397 ssh_runner.go:195] Run: cat /version.json
	I0613 13:06:56.052048   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:56.052073   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:56.105934   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:06:56.105942   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:06:56.292992   39397 ssh_runner.go:195] Run: systemctl --version
	I0613 13:06:56.298160   39397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0613 13:06:56.303629   39397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0613 13:06:56.321543   39397 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
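
The find/sed pass above injects a "name" field into the loopback config when it is missing and pins cniVersion to 1.0.0. A patched file would look roughly like this (hypothetical filename, illustrative contents):

# Shape of a patched loopback config after the sed above (illustrative):
sudo tee /etc/cni/net.d/200-loopback.conf >/dev/null <<'EOF'
{
    "cniVersion": "1.0.0",
    "name": "loopback",
    "type": "loopback"
}
EOF
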
	I0613 13:06:56.321614   39397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0613 13:06:56.331449   39397 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0613 13:06:56.331465   39397 start.go:464] detecting cgroup driver to use...
	I0613 13:06:56.331480   39397 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 13:06:56.331587   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 13:06:56.347531   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0613 13:06:56.358029   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0613 13:06:56.368134   39397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0613 13:06:56.368195   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0613 13:06:56.378436   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 13:06:56.388407   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0613 13:06:56.398410   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0613 13:06:56.408516   39397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0613 13:06:56.418294   39397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
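
The sed edits above rewrite keys in the existing /etc/containerd/config.toml rather than templating a fresh file: the sandbox image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj and SystemdCgroup are forced false (matching the cgroupfs driver detected earlier), legacy runc runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is restored to /etc/cni/net.d. A quick read-only check that the edits landed:

# Verify the containerd config edits above took effect:
grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' \
  /etc/containerd/config.toml
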
	I0613 13:06:56.428231   39397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0613 13:06:56.437034   39397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0613 13:06:56.445597   39397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:06:56.513514   39397 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0613 13:06:56.590616   39397 start.go:464] detecting cgroup driver to use...
	I0613 13:06:56.590680   39397 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0613 13:06:56.590806   39397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0613 13:06:56.604113   39397 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0613 13:06:56.604189   39397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0613 13:06:56.618138   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0613 13:06:56.643864   39397 ssh_runner.go:195] Run: which cri-dockerd
	I0613 13:06:56.650857   39397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0613 13:06:56.665682   39397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0613 13:06:56.686237   39397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0613 13:06:56.811352   39397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0613 13:06:56.911572   39397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0613 13:06:56.911589   39397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0613 13:06:56.930184   39397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:06:57.002347   39397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0613 13:06:57.366441   39397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 13:06:57.426415   39397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0613 13:06:57.506530   39397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0613 13:06:57.581266   39397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:06:57.648380   39397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0613 13:06:57.661758   39397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0613 13:06:57.732182   39397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0613 13:06:57.808065   39397 start.go:511] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0613 13:06:57.808184   39397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0613 13:06:57.813083   39397 start.go:532] Will wait 60s for crictl version
	I0613 13:06:57.813151   39397 ssh_runner.go:195] Run: which crictl
	I0613 13:06:57.817651   39397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0613 13:06:57.866277   39397 start.go:548] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1
	I0613 13:06:57.866356   39397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 13:06:57.893266   39397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0613 13:06:57.963408   39397 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0613 13:06:57.963498   39397 cli_runner.go:164] Run: docker exec -t newest-cni-802000 dig +short host.docker.internal
	I0613 13:06:58.081672   39397 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0613 13:06:58.081812   39397 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0613 13:06:58.086895   39397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
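
The /etc/hosts edit above is an upsert: drop any existing line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts in one step. As a reusable sketch (the helper name is hypothetical):

# Hypothetical helper capturing the hosts-upsert idiom above:
upsert_host() {  # usage: upsert_host <ip> <name>
  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
}
upsert_host 192.168.65.254 host.minikube.internal
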
	I0613 13:06:58.098659   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:58.171310   39397 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0613 13:06:58.193294   39397 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 13:06:58.193387   39397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 13:06:58.215832   39397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0613 13:06:58.215855   39397 docker.go:566] Images already preloaded, skipping extraction
	I0613 13:06:58.215957   39397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0613 13:06:58.238348   39397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0613 13:06:58.238370   39397 cache_images.go:84] Images are preloaded, skipping loading
	I0613 13:06:58.238454   39397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0613 13:06:58.289032   39397 cni.go:84] Creating CNI manager for ""
	I0613 13:06:58.289049   39397 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 13:06:58.289071   39397 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0613 13:06:58.289095   39397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-802000 NodeName:newest-cni-802000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0613 13:06:58.289227   39397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-802000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0613 13:06:58.289299   39397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-802000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:newest-cni-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0613 13:06:58.289369   39397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0613 13:06:58.298608   39397 binaries.go:44] Found k8s binaries, skipping transfer
	I0613 13:06:58.298691   39397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0613 13:06:58.307303   39397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0613 13:06:58.323800   39397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0613 13:06:58.340526   39397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
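
The kubeadm.yaml.new staged here is the multi-document config dumped above. On recent kubeadm releases (v1.26+) such a file can be checked statically before any init phase touches the node; a manual sanity check might look like:

# Static check of the staged kubeadm config (requires kubeadm >= v1.26):
sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
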
	I0613 13:06:58.357316   39397 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0613 13:06:58.362193   39397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0613 13:06:58.373270   39397 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000 for IP: 192.168.76.2
	I0613 13:06:58.373293   39397 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 13:06:58.373452   39397 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
	I0613 13:06:58.373516   39397 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
	I0613 13:06:58.373612   39397 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/client.key
	I0613 13:06:58.373678   39397 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/apiserver.key.31bdca25
	I0613 13:06:58.373750   39397 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/proxy-client.key
	I0613 13:06:58.373979   39397 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
	W0613 13:06:58.374025   39397 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
	I0613 13:06:58.374037   39397 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
	I0613 13:06:58.374071   39397 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
	I0613 13:06:58.374105   39397 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
	I0613 13:06:58.374138   39397 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
	I0613 13:06:58.374207   39397 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
	I0613 13:06:58.374820   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0613 13:06:58.397376   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0613 13:06:58.421895   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0613 13:06:58.444513   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/newest-cni-802000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0613 13:06:58.466628   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0613 13:06:58.488791   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0613 13:06:58.510835   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0613 13:06:58.532911   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0613 13:06:58.554647   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
	I0613 13:06:58.576708   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
	I0613 13:06:58.598829   39397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0613 13:06:58.620590   39397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0613 13:06:58.636984   39397 ssh_runner.go:195] Run: openssl version
	I0613 13:06:58.643157   39397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
	I0613 13:06:58.653163   39397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
	I0613 13:06:58.657586   39397 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
	I0613 13:06:58.657636   39397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
	I0613 13:06:58.664904   39397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
	I0613 13:06:58.674417   39397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0613 13:06:58.684074   39397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0613 13:06:58.688478   39397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I0613 13:06:58.688522   39397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0613 13:06:58.695775   39397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0613 13:06:58.705052   39397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
	I0613 13:06:58.714733   39397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
	I0613 13:06:58.719169   39397 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
	I0613 13:06:58.719223   39397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
	I0613 13:06:58.726287   39397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
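
Each "openssl x509 -hash" call above computes the subject-name hash that OpenSSL's CApath lookup expects as a <hash>.0 symlink; 51391683.0, b5213941.0 and 3ec20f2e.0 in this run are exactly those links. Recreating one by hand:

# Recreate one subject-hash link the way the steps above do:
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
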
	I0613 13:06:58.735392   39397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0613 13:06:58.739678   39397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0613 13:06:58.746883   39397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0613 13:06:58.753754   39397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0613 13:06:58.760904   39397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0613 13:06:58.767986   39397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0613 13:06:58.775117   39397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
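
Each "-checkend 86400" above exits non-zero if the certificate expires within 24 hours, which is what would trigger regeneration on the restart path. The per-file checks, folded into one loop over the same paths:

# Loop form of the expiry checks above (-checkend 86400 = 24h window):
for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
         etcd/healthcheck-client etcd/peer front-proxy-client; do
  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
    || echo "certificate ${c}.crt expires within 24h"
done
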
	I0613 13:06:58.782649   39397 kubeadm.go:404] StartCluster: {Name:newest-cni-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 13:06:58.782767   39397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 13:06:58.803870   39397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0613 13:06:58.813347   39397 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0613 13:06:58.813361   39397 kubeadm.go:636] restartCluster start
	I0613 13:06:58.813433   39397 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0613 13:06:58.822712   39397 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:06:58.822812   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:06:58.875700   39397 kubeconfig.go:135] verify returned: extract IP: "newest-cni-802000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 13:06:58.875895   39397 kubeconfig.go:146] "newest-cni-802000" context is missing from /Users/jenkins/minikube-integration/15003-20351/kubeconfig - will repair!
	I0613 13:06:58.876205   39397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 13:06:58.877870   39397 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0613 13:06:58.888389   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:06:58.888461   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:06:58.900644   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:06:59.402383   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:06:59.402555   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:06:59.415227   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:06:59.902064   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:06:59.902320   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:06:59.914707   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:00.400877   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:00.400955   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:00.412093   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:00.902279   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:00.902461   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:00.915423   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:01.402795   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:01.402951   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:01.414461   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:01.900777   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:01.900879   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:01.911910   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:02.402010   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:02.402127   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:02.414620   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:02.900914   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:02.901087   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:02.913623   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:03.401825   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:03.401917   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:03.412691   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:03.901945   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:03.902130   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:03.914838   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:04.402829   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:04.403036   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:04.415575   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:04.900902   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:04.901011   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:04.911881   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:05.401059   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:05.401248   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:05.413554   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:05.902995   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:05.903130   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:05.915627   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:06.401066   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:06.401168   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:06.412321   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:06.901889   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:06.902093   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:06.914817   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:07.403038   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:07.403205   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:07.415909   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:07.900972   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:07.901068   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:07.912353   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:08.403009   39397 api_server.go:166] Checking apiserver status ...
	I0613 13:07:08.403177   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0613 13:07:08.415693   39397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
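
The block above is a fixed-interval poll: retry pgrep roughly every 500ms until the apiserver process appears or the surrounding deadline expires (here it gives up at 13:07:08 with "context deadline exceeded"). A simplified sketch of the loop's shape:

# Simplified shape of the apiserver poll above (~500ms interval, 60s cap):
deadline=$(( SECONDS + 60 ))
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  (( SECONDS >= deadline )) && { echo 'apiserver did not appear'; break; }
  sleep 0.5
done
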
	I0613 13:07:08.890785   39397 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0613 13:07:08.890825   39397 kubeadm.go:1128] stopping kube-system containers ...
	I0613 13:07:08.890914   39397 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0613 13:07:08.914886   39397 docker.go:462] Stopping containers: [65c335f88da8 692f812d82c4 3fd948469717 d222392b404b d4cc3ed2a8b3 d973bdafea65 b1ec0ff467ed 2eea506b3ba0 52b294e05a8d fe11de62d79a a6097df6db79 fbd432e84f5b 3a886d41b831 fbeebf774aa0 e98ff77941f7 56054d452e5b d09c718f0982]
	I0613 13:07:08.914970   39397 ssh_runner.go:195] Run: docker stop 65c335f88da8 692f812d82c4 3fd948469717 d222392b404b d4cc3ed2a8b3 d973bdafea65 b1ec0ff467ed 2eea506b3ba0 52b294e05a8d fe11de62d79a a6097df6db79 fbd432e84f5b 3a886d41b831 fbeebf774aa0 e98ff77941f7 56054d452e5b d09c718f0982
	I0613 13:07:08.937299   39397 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0613 13:07:08.949463   39397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0613 13:07:08.958406   39397 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun 13 20:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 13 20:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun 13 20:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 13 20:06 /etc/kubernetes/scheduler.conf
	
	I0613 13:07:08.958464   39397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0613 13:07:08.967608   39397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0613 13:07:08.976458   39397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0613 13:07:08.985273   39397 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:08.985330   39397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0613 13:07:08.994200   39397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0613 13:07:09.003246   39397 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0613 13:07:09.003304   39397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0613 13:07:09.012210   39397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0613 13:07:09.021346   39397 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0613 13:07:09.021361   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:07:09.072092   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:07:09.696532   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:07:09.831132   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:07:09.883764   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:07:10.002649   39397 api_server.go:52] waiting for apiserver process to appear ...
	I0613 13:07:10.002728   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:07:10.514294   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:07:11.015142   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:07:11.027379   39397 api_server.go:72] duration metric: took 1.024699546s to wait for apiserver process to appear ...
	I0613 13:07:11.027393   39397 api_server.go:88] waiting for apiserver healthz status ...
	I0613 13:07:11.027410   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:11.028605   39397 api_server.go:269] stopped: https://127.0.0.1:60919/healthz: Get "https://127.0.0.1:60919/healthz": EOF
	I0613 13:07:11.529256   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:13.105797   39397 api_server.go:279] https://127.0.0.1:60919/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0613 13:07:13.105821   39397 api_server.go:103] status: https://127.0.0.1:60919/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 13:07:13.105832   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:13.112527   39397 api_server.go:279] https://127.0.0.1:60919/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0613 13:07:13.112556   39397 api_server.go:103] status: https://127.0.0.1:60919/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0613 13:07:13.529078   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:13.536181   39397 api_server.go:279] https://127.0.0.1:60919/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0613 13:07:13.536198   39397 api_server.go:103] status: https://127.0.0.1:60919/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 13:07:14.029411   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:14.035308   39397 api_server.go:279] https://127.0.0.1:60919/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0613 13:07:14.035327   39397 api_server.go:103] status: https://127.0.0.1:60919/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0613 13:07:14.528807   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:14.535468   39397 api_server.go:279] https://127.0.0.1:60919/healthz returned 200:
	ok
	I0613 13:07:14.600124   39397 api_server.go:141] control plane version: v1.27.2
	I0613 13:07:14.600151   39397 api_server.go:131] duration metric: took 3.572649046s to wait for apiserver health ...
	I0613 13:07:14.600169   39397 cni.go:84] Creating CNI manager for ""
	I0613 13:07:14.600186   39397 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 13:07:14.622809   39397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0613 13:07:14.643465   39397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0613 13:07:14.656702   39397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0613 13:07:14.700829   39397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0613 13:07:14.709006   39397 system_pods.go:59] 8 kube-system pods found
	I0613 13:07:14.709029   39397 system_pods.go:61] "coredns-5d78c9869d-dv7dm" [46de2092-96c2-47f9-9d1c-5ebc67c77b38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0613 13:07:14.709038   39397 system_pods.go:61] "etcd-newest-cni-802000" [17b38846-bed9-44f0-b8be-7ecc6af65d05] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0613 13:07:14.709046   39397 system_pods.go:61] "kube-apiserver-newest-cni-802000" [d29bb845-424c-4723-89e6-eb11aea455c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0613 13:07:14.709053   39397 system_pods.go:61] "kube-controller-manager-newest-cni-802000" [07c66b55-9ebb-4d2e-a086-389852770326] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0613 13:07:14.709060   39397 system_pods.go:61] "kube-proxy-96ltm" [75aea6d6-7574-4c0e-a374-d0aed18f4ef0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0613 13:07:14.709065   39397 system_pods.go:61] "kube-scheduler-newest-cni-802000" [a9a2f9cc-0400-4c4b-aab3-b4adcd9ec245] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0613 13:07:14.709073   39397 system_pods.go:61] "metrics-server-74d5c6b9c-48tbr" [5787a448-0210-4c71-a4a0-75d0833c9612] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0613 13:07:14.709081   39397 system_pods.go:61] "storage-provisioner" [9ee47715-9391-4205-9d97-2edb6f01b858] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0613 13:07:14.709085   39397 system_pods.go:74] duration metric: took 8.245365ms to wait for pod list to return data ...
	I0613 13:07:14.709090   39397 node_conditions.go:102] verifying NodePressure condition ...
	I0613 13:07:14.713413   39397 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0613 13:07:14.713433   39397 node_conditions.go:123] node cpu capacity is 6
	I0613 13:07:14.713444   39397 node_conditions.go:105] duration metric: took 4.349551ms to run NodePressure ...
	I0613 13:07:14.713464   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0613 13:07:15.128926   39397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0613 13:07:15.140585   39397 ops.go:34] apiserver oom_adj: -16
	I0613 13:07:15.163237   39397 kubeadm.go:640] restartCluster took 16.34934247s
	I0613 13:07:15.163251   39397 kubeadm.go:406] StartCluster complete in 16.380137028s
	I0613 13:07:15.163271   39397 settings.go:142] acquiring lock: {Name:mkafbfcc19c3ab5c202e867761622546d4c1b734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 13:07:15.163387   39397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 13:07:15.164419   39397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/kubeconfig: {Name:mk65ac2b4e7c257c263af78026d67e9b20b0f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 13:07:15.165036   39397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0613 13:07:15.165108   39397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0613 13:07:15.165237   39397 addons.go:66] Setting storage-provisioner=true in profile "newest-cni-802000"
	I0613 13:07:15.165263   39397 addons.go:228] Setting addon storage-provisioner=true in "newest-cni-802000"
	I0613 13:07:15.165268   39397 addons.go:66] Setting default-storageclass=true in profile "newest-cni-802000"
	W0613 13:07:15.165274   39397 addons.go:237] addon storage-provisioner should already be in state true
	I0613 13:07:15.165302   39397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-802000"
	I0613 13:07:15.165341   39397 addons.go:66] Setting dashboard=true in profile "newest-cni-802000"
	I0613 13:07:15.165362   39397 config.go:182] Loaded profile config "newest-cni-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 13:07:15.165348   39397 addons.go:66] Setting metrics-server=true in profile "newest-cni-802000"
	I0613 13:07:15.165377   39397 host.go:66] Checking if "newest-cni-802000" exists ...
	I0613 13:07:15.165402   39397 addons.go:228] Setting addon dashboard=true in "newest-cni-802000"
	I0613 13:07:15.165407   39397 addons.go:228] Setting addon metrics-server=true in "newest-cni-802000"
	W0613 13:07:15.165423   39397 addons.go:237] addon dashboard should already be in state true
	W0613 13:07:15.165429   39397 addons.go:237] addon metrics-server should already be in state true
	I0613 13:07:15.165514   39397 host.go:66] Checking if "newest-cni-802000" exists ...
	I0613 13:07:15.165530   39397 host.go:66] Checking if "newest-cni-802000" exists ...
	I0613 13:07:15.165883   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:07:15.165968   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:07:15.166059   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:07:15.166098   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:07:15.203211   39397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-802000" context rescaled to 1 replicas
	I0613 13:07:15.203264   39397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0613 13:07:15.222995   39397 out.go:177] * Verifying Kubernetes components...
	I0613 13:07:15.266168   39397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 13:07:15.296092   39397 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0613 13:07:15.317158   39397 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0613 13:07:15.306115   39397 addons.go:228] Setting addon default-storageclass=true in "newest-cni-802000"
	I0613 13:07:15.338104   39397 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0613 13:07:15.374951   39397 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W0613 13:07:15.374951   39397 addons.go:237] addon default-storageclass should already be in state true
	I0613 13:07:15.396030   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0613 13:07:15.396044   39397 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0613 13:07:15.396052   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0613 13:07:15.396072   39397 host.go:66] Checking if "newest-cni-802000" exists ...
	I0613 13:07:15.396117   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:07:15.396121   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:07:15.417506   39397 cli_runner.go:164] Run: docker container inspect newest-cni-802000 --format={{.State.Status}}
	I0613 13:07:15.453947   39397 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0613 13:07:15.491073   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0613 13:07:15.491103   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0613 13:07:15.491206   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:07:15.501665   39397 start.go:872] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0613 13:07:15.501746   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:07:15.523368   39397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0613 13:07:15.523395   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0613 13:07:15.523519   39397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-802000
	I0613 13:07:15.526342   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:07:15.526343   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:07:15.568563   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:07:15.578581   39397 api_server.go:52] waiting for apiserver process to appear ...
	I0613 13:07:15.578668   39397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 13:07:15.592126   39397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60920 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/newest-cni-802000/id_rsa Username:docker}
	I0613 13:07:15.594128   39397 api_server.go:72] duration metric: took 390.815739ms to wait for apiserver process to appear ...
	I0613 13:07:15.594143   39397 api_server.go:88] waiting for apiserver healthz status ...
	I0613 13:07:15.594156   39397 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60919/healthz ...
	I0613 13:07:15.600445   39397 api_server.go:279] https://127.0.0.1:60919/healthz returned 200:
	ok
	I0613 13:07:15.602050   39397 api_server.go:141] control plane version: v1.27.2
	I0613 13:07:15.602061   39397 api_server.go:131] duration metric: took 7.911952ms to wait for apiserver health ...
	I0613 13:07:15.602066   39397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0613 13:07:15.608342   39397 system_pods.go:59] 8 kube-system pods found
	I0613 13:07:15.608360   39397 system_pods.go:61] "coredns-5d78c9869d-dv7dm" [46de2092-96c2-47f9-9d1c-5ebc67c77b38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0613 13:07:15.608367   39397 system_pods.go:61] "etcd-newest-cni-802000" [17b38846-bed9-44f0-b8be-7ecc6af65d05] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0613 13:07:15.608378   39397 system_pods.go:61] "kube-apiserver-newest-cni-802000" [d29bb845-424c-4723-89e6-eb11aea455c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0613 13:07:15.608384   39397 system_pods.go:61] "kube-controller-manager-newest-cni-802000" [07c66b55-9ebb-4d2e-a086-389852770326] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0613 13:07:15.608392   39397 system_pods.go:61] "kube-proxy-96ltm" [75aea6d6-7574-4c0e-a374-d0aed18f4ef0] Running
	I0613 13:07:15.608399   39397 system_pods.go:61] "kube-scheduler-newest-cni-802000" [a9a2f9cc-0400-4c4b-aab3-b4adcd9ec245] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0613 13:07:15.608407   39397 system_pods.go:61] "metrics-server-74d5c6b9c-48tbr" [5787a448-0210-4c71-a4a0-75d0833c9612] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0613 13:07:15.608413   39397 system_pods.go:61] "storage-provisioner" [9ee47715-9391-4205-9d97-2edb6f01b858] Running
	I0613 13:07:15.608417   39397 system_pods.go:74] duration metric: took 6.346307ms to wait for pod list to return data ...
	I0613 13:07:15.608423   39397 default_sa.go:34] waiting for default service account to be created ...
	I0613 13:07:15.611977   39397 default_sa.go:45] found service account: "default"
	I0613 13:07:15.611989   39397 default_sa.go:55] duration metric: took 3.561637ms for default service account to be created ...
	I0613 13:07:15.611996   39397 kubeadm.go:581] duration metric: took 408.68689ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0613 13:07:15.612008   39397 node_conditions.go:102] verifying NodePressure condition ...
	I0613 13:07:15.614849   39397 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0613 13:07:15.614862   39397 node_conditions.go:123] node cpu capacity is 6
	I0613 13:07:15.614871   39397 node_conditions.go:105] duration metric: took 2.859119ms to run NodePressure ...
	I0613 13:07:15.614880   39397 start.go:228] waiting for startup goroutines ...
	I0613 13:07:15.632017   39397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0613 13:07:15.632122   39397 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0613 13:07:15.632132   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0613 13:07:15.650680   39397 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0613 13:07:15.650694   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0613 13:07:15.669834   39397 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0613 13:07:15.669848   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0613 13:07:15.672628   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0613 13:07:15.672645   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0613 13:07:15.719071   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0613 13:07:15.719086   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0613 13:07:15.719778   39397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0613 13:07:15.719791   39397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0613 13:07:15.742575   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0613 13:07:15.742599   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0613 13:07:15.822989   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0613 13:07:15.823004   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0613 13:07:15.907037   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0613 13:07:15.907052   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0613 13:07:16.011330   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0613 13:07:16.011345   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0613 13:07:16.034140   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0613 13:07:16.034159   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0613 13:07:16.052380   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0613 13:07:16.052393   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0613 13:07:16.099968   39397 addons.go:420] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0613 13:07:16.099981   39397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0613 13:07:16.119620   39397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0613 13:07:16.845942   39397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21385574s)
	I0613 13:07:16.846011   39397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.126150006s)
	I0613 13:07:16.908843   39397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189009111s)
	I0613 13:07:16.908869   39397 addons.go:464] Verifying addon metrics-server=true in "newest-cni-802000"
	I0613 13:07:17.284542   39397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.164857565s)
	I0613 13:07:17.306497   39397 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-802000 addons enable metrics-server	
	
	
	I0613 13:07:17.364598   39397 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0613 13:07:17.422526   39397 addons.go:499] enable addons completed in 2.257363369s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0613 13:07:17.422595   39397 start.go:233] waiting for cluster config update ...
	I0613 13:07:17.422635   39397 start.go:242] writing updated cluster config ...
	I0613 13:07:17.423323   39397 ssh_runner.go:195] Run: rm -f paused
	I0613 13:07:17.465307   39397 start.go:582] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0613 13:07:17.502642   39397 out.go:177] 
	W0613 13:07:17.539593   39397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0613 13:07:17.576756   39397 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0613 13:07:17.634492   39397 out.go:177] * Done! kubectl is now configured to use "newest-cni-802000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.767650479Z" level=info msg="Loading containers: start."
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.856306269Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.893875182Z" level=info msg="Loading containers: done."
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.902742638Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.902801729Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.932004572Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:45:03 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:03.932050353Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:45:03 old-k8s-version-554000 systemd[1]: Started Docker Application Container Engine.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.749280155Z" level=info msg="Processing signal 'terminated'"
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.750223589Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[699]: time="2023-06-13T19:45:11.750398147Z" level=info msg="Daemon shutdown complete"
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: docker.service: Deactivated successfully.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 13 19:45:11 old-k8s-version-554000 systemd[1]: Starting Docker Application Container Engine...
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:11.806954552Z" level=info msg="Starting up"
	Jun 13 19:45:11 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:11.952568721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.130460518Z" level=info msg="Loading containers: start."
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.220890932Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.259079227Z" level=info msg="Loading containers: done."
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.268343604Z" level=info msg="Docker daemon" commit=659604f graphdriver=overlay2 version=24.0.2
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.268406325Z" level=info msg="Daemon has completed initialization"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.296356733Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 13 19:45:12 old-k8s-version-554000 dockerd[920]: time="2023-06-13T19:45:12.296512735Z" level=info msg="API listen on [::]:2376"
	Jun 13 19:45:12 old-k8s-version-554000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-06-13T20:09:25Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:09:25 up  3:08,  0 users,  load average: 0.36, 1.05, 1.05
	Linux old-k8s-version-554000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jun 13 20:09:24 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: I0613 20:09:24.438836   33308 server.go:410] Version: v1.16.0
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: I0613 20:09:24.439246   33308 plugins.go:100] No cloud provider specified.
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: I0613 20:09:24.439388   33308 server.go:773] Client rotation is on, will bootstrap in background
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: I0613 20:09:24.441707   33308 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: W0613 20:09:24.442698   33308 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: W0613 20:09:24.442833   33308 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 13 20:09:24 old-k8s-version-554000 kubelet[33308]: F0613 20:09:24.442868   33308 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 13 20:09:24 old-k8s-version-554000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 13 20:09:24 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1437.
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: I0613 20:09:25.212348   33342 server.go:410] Version: v1.16.0
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: I0613 20:09:25.212674   33342 plugins.go:100] No cloud provider specified.
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: I0613 20:09:25.212712   33342 server.go:773] Client rotation is on, will bootstrap in background
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: I0613 20:09:25.214501   33342 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: W0613 20:09:25.215159   33342 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: W0613 20:09:25.215234   33342 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 13 20:09:25 old-k8s-version-554000 kubelet[33342]: F0613 20:09:25.215295   33342 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1438.
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 13 20:09:25 old-k8s-version-554000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0613 13:09:25.625280   39658 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
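The "Last Start" log captured above shows minikube's readiness gate in action: api_server.go probes https://127.0.0.1:60919/healthz roughly every 500ms, logging the 500-status component report (the [+]/[-] lines) until the endpoint returns 200 "ok". Below is a minimal sketch of that kind of polling loop in Go; the URL, the one-minute deadline, and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes url until it returns HTTP 200 or the deadline passes.
// A 500 response carries a [+]/[-] component report like the one in the
// log above; it is printed and the probe retries.
func pollHealthz(url string, timeout time.Duration) error {
	// The apiserver serves healthz over TLS with a cluster-local CA; a real
	// client would load that CA. Skipping verification keeps this sketch
	// self-contained (an assumption, not minikube's configuration).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://127.0.0.1:60919/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}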
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 2 (355.387775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-554000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (417.79s)
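The status probe above (out/minikube-darwin-amd64 status --format={{.APIServer}}) takes a Go text/template that is evaluated against minikube's status struct, which is why the stdout capture contains only the single word "Stopped". A minimal sketch of that mechanism follows, using a hypothetical Status type in place of minikube's real one.

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's status struct; only the
// field selected by --format={{.APIServer}} in the log is relevant here.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// {{.APIServer}} is the exact template string passed via --format above.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
	// Prints: Stopped (matching the -- stdout -- capture above)
}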


Test pass (283/316)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 23.23
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.27.2/json-events 22.46
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.32
16 TestDownloadOnly/DeleteAll 0.61
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
18 TestDownloadOnlyKic 1.93
19 TestBinaryMirror 1.57
20 TestOffline 55.19
22 TestAddons/Setup 214.02
26 TestAddons/parallel/InspektorGadget 10.64
27 TestAddons/parallel/MetricsServer 5.7
28 TestAddons/parallel/HelmTiller 14.3
30 TestAddons/parallel/CSI 58.18
31 TestAddons/parallel/Headlamp 14.45
32 TestAddons/parallel/CloudSpanner 5.52
35 TestAddons/serial/GCPAuth/Namespaces 0.11
36 TestAddons/StoppedEnableDisable 11.51
37 TestCertOptions 30.14
38 TestCertExpiration 254.13
39 TestDockerFlags 31.3
40 TestForceSystemdFlag 27.56
41 TestForceSystemdEnv 30.73
43 TestHyperKitDriverInstallOrUpdate 7.14
46 TestErrorSpam/setup 25.19
47 TestErrorSpam/start 1.96
48 TestErrorSpam/status 1.13
49 TestErrorSpam/pause 1.64
50 TestErrorSpam/unpause 1.77
51 TestErrorSpam/stop 11.46
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 39.83
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 41
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.08
62 TestFunctional/serial/CacheCmd/cache/add_remote 6.41
63 TestFunctional/serial/CacheCmd/cache/add_local 1.5
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
65 TestFunctional/serial/CacheCmd/cache/list 0.07
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.39
67 TestFunctional/serial/CacheCmd/cache/cache_reload 2.49
68 TestFunctional/serial/CacheCmd/cache/delete 0.15
69 TestFunctional/serial/MinikubeKubectlCmd 0.58
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.75
71 TestFunctional/serial/ExtraConfig 38.12
72 TestFunctional/serial/ComponentHealth 0.06
73 TestFunctional/serial/LogsCmd 3.1
74 TestFunctional/serial/LogsFileCmd 3.11
75 TestFunctional/serial/InvalidService 4.22
77 TestFunctional/parallel/ConfigCmd 0.42
78 TestFunctional/parallel/DashboardCmd 22.6
79 TestFunctional/parallel/DryRun 1.68
80 TestFunctional/parallel/InternationalLanguage 0.85
81 TestFunctional/parallel/StatusCmd 1.13
86 TestFunctional/parallel/AddonsCmd 0.29
87 TestFunctional/parallel/PersistentVolumeClaim 27.08
89 TestFunctional/parallel/SSHCmd 0.75
90 TestFunctional/parallel/CpCmd 1.78
91 TestFunctional/parallel/MySQL 39.96
92 TestFunctional/parallel/FileSync 0.44
93 TestFunctional/parallel/CertSync 2.53
97 TestFunctional/parallel/NodeLabels 0.07
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
101 TestFunctional/parallel/License 0.87
102 TestFunctional/parallel/Version/short 0.09
103 TestFunctional/parallel/Version/components 1.08
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
107 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
108 TestFunctional/parallel/ImageCommands/ImageBuild 3.09
109 TestFunctional/parallel/ImageCommands/Setup 3.12
110 TestFunctional/parallel/DockerEnv/bash 1.79
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.29
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.81
116 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.09
117 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.73
118 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.84
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.97
121 TestFunctional/parallel/ServiceCmd/DeployApp 17.28
122 TestFunctional/parallel/ServiceCmd/List 0.42
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
124 TestFunctional/parallel/ServiceCmd/HTTPS 15
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.16
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
136 TestFunctional/parallel/ServiceCmd/Format 15
137 TestFunctional/parallel/ServiceCmd/URL 15
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
139 TestFunctional/parallel/ProfileCmd/profile_list 0.44
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
141 TestFunctional/parallel/MountCmd/any-port 9.54
142 TestFunctional/parallel/MountCmd/specific-port 2.55
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.65
144 TestFunctional/delete_addon-resizer_images 0.14
145 TestFunctional/delete_my-image_image 0.05
146 TestFunctional/delete_minikube_cached_images 0.05
150 TestImageBuild/serial/Setup 25.32
151 TestImageBuild/serial/NormalBuild 2.15
152 TestImageBuild/serial/BuildWithBuildArg 0.84
153 TestImageBuild/serial/BuildWithDockerIgnore 0.66
154 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
164 TestJSONOutput/start/Command 40.32
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.6
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.61
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.86
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.7
189 TestKicCustomNetwork/create_custom_network 26.86
190 TestKicCustomNetwork/use_default_bridge_network 27.46
191 TestKicExistingNetwork 26.98
192 TestKicCustomSubnet 27.12
193 TestKicStaticIP 27.52
194 TestMainNoArgs 0.06
195 TestMinikubeProfile 56.19
198 TestMountStart/serial/StartWithMountFirst 8.15
199 TestMountStart/serial/VerifyMountFirst 0.36
200 TestMountStart/serial/StartWithMountSecond 7.75
201 TestMountStart/serial/VerifyMountSecond 0.36
202 TestMountStart/serial/DeleteFirst 2.04
203 TestMountStart/serial/VerifyMountPostDelete 0.35
204 TestMountStart/serial/Stop 1.53
205 TestMountStart/serial/RestartStopped 9.1
206 TestMountStart/serial/VerifyMountPostStop 0.36
209 TestMultiNode/serial/FreshStart2Nodes 71.22
210 TestMultiNode/serial/DeployApp2Nodes 45.16
211 TestMultiNode/serial/PingHostFrom2Pods 0.84
212 TestMultiNode/serial/AddNode 18.68
213 TestMultiNode/serial/ProfileList 0.39
214 TestMultiNode/serial/CopyFile 13.07
215 TestMultiNode/serial/StopNode 2.83
216 TestMultiNode/serial/StartAfterStop 13.3
217 TestMultiNode/serial/RestartKeepsNodes 120.69
218 TestMultiNode/serial/DeleteNode 5.86
219 TestMultiNode/serial/StopMultiNode 21.75
220 TestMultiNode/serial/RestartMultiNode 57.56
221 TestMultiNode/serial/ValidateNameConflict 28.78
225 TestPreload 211.36
227 TestScheduledStopUnix 98.58
228 TestSkaffold 122.88
230 TestInsufficientStorage 13.9
246 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 20.78
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 15.82
248 TestStoppedBinaryUpgrade/Setup 4.52
250 TestStoppedBinaryUpgrade/MinikubeLogs 3.54
252 TestPause/serial/Start 76.76
253 TestPause/serial/SecondStartNoReconfiguration 38.89
254 TestPause/serial/Pause 0.62
255 TestPause/serial/VerifyStatus 0.37
256 TestPause/serial/Unpause 0.71
257 TestPause/serial/PauseAgain 0.8
258 TestPause/serial/DeletePaused 2.51
259 TestPause/serial/VerifyDeletedResources 0.51
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
270 TestNoKubernetes/serial/StartWithStopK8s 10.14
271 TestNoKubernetes/serial/Start 7.67
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
273 TestNoKubernetes/serial/ProfileList 34.76
274 TestNoKubernetes/serial/Stop 1.51
275 TestNoKubernetes/serial/StartNoArgs 7.98
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
277 TestNetworkPlugins/group/auto/Start 39.75
278 TestNetworkPlugins/group/auto/KubeletFlags 0.39
279 TestNetworkPlugins/group/auto/NetCatPod 13.24
280 TestNetworkPlugins/group/auto/DNS 0.14
281 TestNetworkPlugins/group/auto/Localhost 0.12
282 TestNetworkPlugins/group/auto/HairPin 0.11
283 TestNetworkPlugins/group/kindnet/Start 54.28
284 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
285 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
286 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
287 TestNetworkPlugins/group/kindnet/DNS 0.13
288 TestNetworkPlugins/group/kindnet/Localhost 0.12
289 TestNetworkPlugins/group/kindnet/HairPin 0.12
290 TestNetworkPlugins/group/calico/Start 70.96
291 TestNetworkPlugins/group/calico/ControllerPod 5.02
292 TestNetworkPlugins/group/custom-flannel/Start 54.63
293 TestNetworkPlugins/group/calico/KubeletFlags 0.51
294 TestNetworkPlugins/group/calico/NetCatPod 13.37
295 TestNetworkPlugins/group/calico/DNS 0.13
296 TestNetworkPlugins/group/calico/Localhost 0.13
297 TestNetworkPlugins/group/calico/HairPin 0.18
298 TestNetworkPlugins/group/false/Start 42.37
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.2
301 TestNetworkPlugins/group/custom-flannel/DNS 0.13
302 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
303 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
304 TestNetworkPlugins/group/false/KubeletFlags 0.38
305 TestNetworkPlugins/group/false/NetCatPod 13.26
306 TestNetworkPlugins/group/enable-default-cni/Start 41.26
307 TestNetworkPlugins/group/false/DNS 0.16
308 TestNetworkPlugins/group/false/Localhost 0.15
309 TestNetworkPlugins/group/false/HairPin 0.15
310 TestNetworkPlugins/group/flannel/Start 54.76
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
312 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.22
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
316 TestNetworkPlugins/group/bridge/Start 49.04
317 TestNetworkPlugins/group/flannel/ControllerPod 5.02
318 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
319 TestNetworkPlugins/group/flannel/NetCatPod 13.3
320 TestNetworkPlugins/group/flannel/DNS 0.15
321 TestNetworkPlugins/group/flannel/Localhost 0.12
322 TestNetworkPlugins/group/flannel/HairPin 0.12
323 TestNetworkPlugins/group/kubenet/Start 41.96
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
325 TestNetworkPlugins/group/bridge/NetCatPod 12.34
326 TestNetworkPlugins/group/bridge/DNS 0.13
327 TestNetworkPlugins/group/bridge/Localhost 0.12
328 TestNetworkPlugins/group/bridge/HairPin 0.12
331 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
332 TestNetworkPlugins/group/kubenet/NetCatPod 14.41
333 TestNetworkPlugins/group/kubenet/DNS 0.14
334 TestNetworkPlugins/group/kubenet/Localhost 0.14
335 TestNetworkPlugins/group/kubenet/HairPin 0.13
337 TestStartStop/group/no-preload/serial/FirstStart 66.2
338 TestStartStop/group/no-preload/serial/DeployApp 9.29
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
340 TestStartStop/group/no-preload/serial/Stop 10.94
341 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.36
342 TestStartStop/group/no-preload/serial/SecondStart 582.16
345 TestStartStop/group/old-k8s-version/serial/Stop 1.49
346 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.35
348 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
349 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
350 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.41
351 TestStartStop/group/no-preload/serial/Pause 3.14
353 TestStartStop/group/embed-certs/serial/FirstStart 79.01
354 TestStartStop/group/embed-certs/serial/DeployApp 11.3
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
356 TestStartStop/group/embed-certs/serial/Stop 11.04
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.35
358 TestStartStop/group/embed-certs/serial/SecondStart 334.77
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
363 TestStartStop/group/embed-certs/serial/Pause 2.99
365 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.35
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 312.24
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.41
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
377 TestStartStop/group/newest-cni/serial/FirstStart 38.98
378 TestStartStop/group/newest-cni/serial/DeployApp 0
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
380 TestStartStop/group/newest-cni/serial/Stop 5.85
381 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.36
382 TestStartStop/group/newest-cni/serial/SecondStart 28.04
383 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
386 TestStartStop/group/newest-cni/serial/Pause 2.96

TestDownloadOnly/v1.16.0/json-events (23.23s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-210000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-210000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (23.224909222s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.23s)
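TestDownloadOnly drives minikube start -o=json, which emits one JSON event per line on stdout. A minimal consumer sketch in Go follows; minikube's events are CloudEvents-style, but the "type" key accessed here is an assumption for illustration, since the event schema is not shown in this report.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Read JSON events line by line from stdin, e.g.:
	//   minikube start -o=json --download-only ... | go run consume.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON line rather than failing
		}
		// "type" is an assumed CloudEvents-style key; adjust to the real schema.
		fmt.Printf("event: %v\n", ev["type"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}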

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-210000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-210000: exit status 85 (278.994123ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-210000 | jenkins | v1.30.1 | 13 Jun 23 11:42 PDT |          |
	|         | -p download-only-210000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 11:42:13
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 11:42:13.774758   20802 out.go:296] Setting OutFile to fd 1 ...
	I0613 11:42:13.774931   20802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:42:13.774936   20802 out.go:309] Setting ErrFile to fd 2...
	I0613 11:42:13.774940   20802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:42:13.775052   20802 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	W0613 11:42:13.775155   20802 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15003-20351/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15003-20351/.minikube/config/config.json: no such file or directory
	I0613 11:42:13.776737   20802 out.go:303] Setting JSON to true
	I0613 11:42:13.796501   20802 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6104,"bootTime":1686675629,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 11:42:13.796581   20802 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 11:42:13.818507   20802 out.go:97] [download-only-210000] minikube v1.30.1 on Darwin 13.4
	I0613 11:42:13.818704   20802 notify.go:220] Checking for updates...
	W0613 11:42:13.818738   20802 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball: no such file or directory
	I0613 11:42:13.840031   20802 out.go:169] MINIKUBE_LOCATION=15003
	I0613 11:42:13.861472   20802 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 11:42:13.883402   20802 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 11:42:13.905343   20802 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 11:42:13.927305   20802 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	W0613 11:42:13.969061   20802 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0613 11:42:13.969619   20802 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 11:42:14.026507   20802 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 11:42:14.026616   20802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:42:14.121325   20802 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:42:14.110039539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 11:42:14.143338   20802 out.go:97] Using the docker driver based on user configuration
	I0613 11:42:14.143380   20802 start.go:297] selected driver: docker
	I0613 11:42:14.143391   20802 start.go:884] validating driver "docker" against <nil>
	I0613 11:42:14.143619   20802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:42:14.239828   20802 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:42:14.228393464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 11:42:14.240010   20802 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0613 11:42:14.243833   20802 start_flags.go:382] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0613 11:42:14.244324   20802 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0613 11:42:14.265722   20802 out.go:169] Using Docker Desktop driver with root privileges
	I0613 11:42:14.287458   20802 cni.go:84] Creating CNI manager for ""
	I0613 11:42:14.287494   20802 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0613 11:42:14.287511   20802 start_flags.go:319] config:
	{Name:download-only-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:42:14.309524   20802 out.go:97] Starting control plane node download-only-210000 in cluster download-only-210000
	I0613 11:42:14.309652   20802 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 11:42:14.331374   20802 out.go:97] Pulling base image ...
	I0613 11:42:14.331495   20802 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 11:42:14.331586   20802 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 11:42:14.381526   20802 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0613 11:42:14.381805   20802 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0613 11:42:14.381926   20802 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0613 11:42:14.444251   20802 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0613 11:42:14.444283   20802 cache.go:57] Caching tarball of preloaded images
	I0613 11:42:14.445358   20802 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 11:42:14.466825   20802 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0613 11:42:14.466943   20802 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:42:14.673298   20802 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0613 11:42:30.022494   20802 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0613 11:42:30.902827   20802 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:42:30.902970   20802 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:42:31.444111   20802 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0613 11:42:31.444318   20802 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/download-only-210000/config.json ...
	I0613 11:42:31.444345   20802 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/download-only-210000/config.json: {Name:mkfc9f29abbd36ce3c77e220eb47788eec81a03d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0613 11:42:31.444717   20802 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0613 11:42:31.445055   20802 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-210000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

TestDownloadOnly/v1.27.2/json-events (22.46s)

=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-210000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-210000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=docker : (22.461526632s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (22.46s)

TestDownloadOnly/v1.27.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnly/v1.27.2/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-210000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-210000: exit status 85 (323.172161ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-210000 | jenkins | v1.30.1 | 13 Jun 23 11:42 PDT |          |
	|         | -p download-only-210000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-210000 | jenkins | v1.30.1 | 13 Jun 23 11:42 PDT |          |
	|         | -p download-only-210000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/13 11:42:37
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0613 11:42:37.281297   20840 out.go:296] Setting OutFile to fd 1 ...
	I0613 11:42:37.281462   20840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:42:37.281468   20840 out.go:309] Setting ErrFile to fd 2...
	I0613 11:42:37.281472   20840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:42:37.281579   20840 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	W0613 11:42:37.281670   20840 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15003-20351/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15003-20351/.minikube/config/config.json: no such file or directory
	I0613 11:42:37.282893   20840 out.go:303] Setting JSON to true
	I0613 11:42:37.301871   20840 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6128,"bootTime":1686675629,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 11:42:37.301958   20840 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 11:42:37.323449   20840 out.go:97] [download-only-210000] minikube v1.30.1 on Darwin 13.4
	I0613 11:42:37.323738   20840 notify.go:220] Checking for updates...
	I0613 11:42:37.345234   20840 out.go:169] MINIKUBE_LOCATION=15003
	I0613 11:42:37.366691   20840 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 11:42:37.388593   20840 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 11:42:37.410163   20840 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 11:42:37.431534   20840 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	W0613 11:42:37.475337   20840 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0613 11:42:37.476006   20840 config.go:182] Loaded profile config "download-only-210000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0613 11:42:37.476085   20840 start.go:792] api.Load failed for download-only-210000: filestore "download-only-210000": Docker machine "download-only-210000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0613 11:42:37.476268   20840 driver.go:373] Setting default libvirt URI to qemu:///system
	W0613 11:42:37.476296   20840 start.go:792] api.Load failed for download-only-210000: filestore "download-only-210000": Docker machine "download-only-210000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0613 11:42:37.532900   20840 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 11:42:37.533006   20840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:42:37.626329   20840 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:42:37.614993018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 11:42:37.648103   20840 out.go:97] Using the docker driver based on existing profile
	I0613 11:42:37.648205   20840 start.go:297] selected driver: docker
	I0613 11:42:37.648215   20840 start.go:884] validating driver "docker" against &{Name:download-only-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:42:37.648502   20840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:42:37.741740   20840 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:42:37.731533445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0613 11:42:37.744547   20840 cni.go:84] Creating CNI manager for ""
	I0613 11:42:37.744565   20840 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0613 11:42:37.744582   20840 start_flags.go:319] config:
	{Name:download-only-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:42:37.767699   20840 out.go:97] Starting control plane node download-only-210000 in cluster download-only-210000
	I0613 11:42:37.767809   20840 cache.go:122] Beginning downloading kic base image for docker with docker
	I0613 11:42:37.789216   20840 out.go:97] Pulling base image ...
	I0613 11:42:37.789341   20840 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 11:42:37.789396   20840 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0613 11:42:37.839092   20840 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0613 11:42:37.839235   20840 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0613 11:42:37.839256   20840 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory, skipping pull
	I0613 11:42:37.839262   20840 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in cache, skipping pull
	I0613 11:42:37.839284   20840 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0613 11:42:37.874012   20840 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0613 11:42:37.874082   20840 cache.go:57] Caching tarball of preloaded images
	I0613 11:42:37.875111   20840 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 11:42:37.897398   20840 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0613 11:42:37.897477   20840 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:42:38.099258   20840 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4?checksum=md5:1858f4460df043b5f83c3d1ea676dbc0 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0613 11:42:54.129856   20840 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:42:54.130077   20840 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0613 11:42:54.731877   20840 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0613 11:42:54.731958   20840 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/download-only-210000/config.json ...
	I0613 11:42:54.732331   20840 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0613 11:42:54.732564   20840 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/darwin/amd64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-210000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.32s)

TestDownloadOnly/DeleteAll (0.61s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.61s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-210000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

TestDownloadOnlyKic (1.93s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-086000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-086000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-086000
--- PASS: TestDownloadOnlyKic (1.93s)

TestBinaryMirror (1.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-224000 --alsologtostderr --binary-mirror http://127.0.0.1:55297 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-224000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-224000
--- PASS: TestBinaryMirror (1.57s)

TestOffline (55.19s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-761000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-761000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (52.282856261s)
helpers_test.go:175: Cleaning up "offline-docker-761000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-761000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-761000: (2.908548963s)
--- PASS: TestOffline (55.19s)

TestAddons/Setup (214.02s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-054000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-054000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m34.024508722s)
--- PASS: TestAddons/Setup (214.02s)

TestAddons/parallel/InspektorGadget (10.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vd8kh" [d9c12f13-0795-425e-83a3-210299d5fe6c] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011509653s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-054000
addons_test.go:817: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-054000: (5.630804253s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.09396ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-8lw7l" [9a967188-4f6a-4833-a498-19b0d1a3fa8d] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01072813s
addons_test.go:391: (dbg) Run:  kubectl --context addons-054000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p addons-054000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/HelmTiller (14.3s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 4.85652ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-vcws4" [ea65f2ca-77a0-4530-8978-35f50fc314bb] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013516497s
addons_test.go:449: (dbg) Run:  kubectl --context addons-054000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-054000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.77523069s)
addons_test.go:466: (dbg) Run:  out/minikube-darwin-amd64 -p addons-054000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.30s)

TestAddons/parallel/CSI (58.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.115182ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-054000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-054000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6ece0152-513f-479e-8f34-4ea4cedcb3ac] Pending
helpers_test.go:344: "task-pv-pod" [6ece0152-513f-479e-8f34-4ea4cedcb3ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6ece0152-513f-479e-8f34-4ea4cedcb3ac] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.00863136s
addons_test.go:560: (dbg) Run:  kubectl --context addons-054000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-054000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-054000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-054000 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-054000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-054000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-054000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c67e80c3-9455-430d-99d0-290408faad7e] Pending
helpers_test.go:344: "task-pv-pod-restore" [c67e80c3-9455-430d-99d0-290408faad7e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c67e80c3-9455-430d-99d0-290408faad7e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.01069529s
addons_test.go:602: (dbg) Run:  kubectl --context addons-054000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-054000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-054000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-amd64 -p addons-054000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-amd64 -p addons-054000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.479571585s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-amd64 -p addons-054000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.18s)

TestAddons/parallel/Headlamp (14.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-054000 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-054000 --alsologtostderr -v=1: (1.442846658s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-b44v8" [7bcbfaff-0f38-4dd6-bca1-0242d8981f6d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-b44v8" [7bcbfaff-0f38-4dd6-bca1-0242d8981f6d] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.010624996s
--- PASS: TestAddons/parallel/Headlamp (14.45s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-5tspd" [6c1fd189-0969-408c-83a1-2fa8682ec168] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010269677s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-054000
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-054000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-054000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.51s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-054000
addons_test.go:148: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-054000: (10.996383411s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-054000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-054000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-054000
--- PASS: TestAddons/StoppedEnableDisable (11.51s)

TestCertOptions (30.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-953000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-953000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (26.896247882s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-953000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-953000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-953000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-953000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-953000: (2.477771751s)
--- PASS: TestCertOptions (30.14s)

TestCertExpiration (254.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-367000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-367000 --memory=2048 --cert-expiration=3m --driver=docker : (50.214997809s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-367000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0613 12:26:00.484509   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-367000 --memory=2048 --cert-expiration=8760h --driver=docker : (21.408570753s)
helpers_test.go:175: Cleaning up "cert-expiration-367000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-367000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-367000: (2.50559393s)
--- PASS: TestCertExpiration (254.13s)
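The two start invocations above differ only in --cert-expiration: 3m makes the freshly minted cluster certificates expire almost immediately (the test then waits out those three minutes, which accounts for most of this block's 254 s), while 8760h is one year. A quick sanity check of those flag values with Go's duration parser, purely for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")
        long, _ := time.ParseDuration("8760h")
        fmt.Println(short)             // 3m0s
        fmt.Println(long.Hours() / 24) // 365 (days), i.e. one year
    }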

TestDockerFlags (31.3s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-016000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-016000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (27.550570923s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-016000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-016000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-016000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-016000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-016000: (2.921409631s)
--- PASS: TestDockerFlags (31.30s)
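The two ssh probes above are how the test confirms that --docker-env and --docker-opt reached dockerd: systemd exposes them through "systemctl show". A hedged sketch of the Environment check, reusing the binary path and profile name from the log (not the test's real assertion code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-016000",
            "ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
        if err != nil {
            panic(err)
        }
        // systemctl prints one line of the form: Environment=FOO=BAR BAZ=BAT ...
        env := strings.TrimPrefix(strings.TrimSpace(string(out)), "Environment=")
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(env, want) {
                fmt.Println("missing docker-env entry:", want)
            }
        }
    }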

TestForceSystemdFlag (27.56s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-409000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E0613 12:21:42.272525   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-409000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (24.384422813s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-409000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-409000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-409000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-409000: (2.729414715s)
--- PASS: TestForceSystemdFlag (27.56s)

TestForceSystemdEnv (30.73s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-497000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0613 12:21:38.842840   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-497000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (27.150622606s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-497000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-497000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-497000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-497000: (3.11434392s)
--- PASS: TestForceSystemdEnv (30.73s)

TestHyperKitDriverInstallOrUpdate (7.14s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.14s)

TestErrorSpam/setup (25.19s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-774000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-774000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 --driver=docker : (25.191923571s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2."
--- PASS: TestErrorSpam/setup (25.19s)

TestErrorSpam/start (1.96s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 start --dry-run
--- PASS: TestErrorSpam/start (1.96s)

TestErrorSpam/status (1.13s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.77s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

TestErrorSpam/stop (11.46s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 stop: (10.867859892s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-774000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-774000 stop
--- PASS: TestErrorSpam/stop (11.46s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/test/nested/copy/20800/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.83s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-216000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-216000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (39.83145133s)
--- PASS: TestFunctional/serial/StartWithProxy (39.83s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-216000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-216000 --alsologtostderr -v=8: (40.997109083s)
functional_test.go:659: soft start took 40.997557191s for "functional-216000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.00s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-216000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 cache add registry.k8s.io/pause:3.1: (2.084739808s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 cache add registry.k8s.io/pause:3.3: (2.377830078s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 cache add registry.k8s.io/pause:latest: (1.947384252s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.41s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2111436181/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cache add minikube-local-cache-test:functional-216000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 cache add minikube-local-cache-test:functional-216000: (1.078428787s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cache delete minikube-local-cache-test:functional-216000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-216000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.49s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (370.076233ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 cache reload: (1.34817565s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.49s)
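The round-trip exercised above (delete the image inside the node, confirm crictl no longer sees it, "cache reload", confirm it is back) can be scripted directly. A minimal Go sketch, assuming the same binary and functional-216000 profile as the log, with only rudimentary error handling:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a tiny helper around the minikube binary from the log above.
    func run(args ...string) error {
        return exec.Command("out/minikube-darwin-amd64", args...).Run()
    }

    func main() {
        p := "functional-216000"
        _ = run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
        if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
            fmt.Println("image unexpectedly still present before reload")
        }
        if err := run("-p", p, "cache", "reload"); err != nil {
            panic(err)
        }
        if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
            fmt.Println("image still missing after reload:", err)
        }
    }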

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.58s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 kubectl -- --context functional-216000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.58s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.75s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-216000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.75s)

TestFunctional/serial/ExtraConfig (38.12s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-216000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-216000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.115605027s)
functional_test.go:757: restart took 38.115770446s for "functional-216000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.12s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-216000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.1s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 logs: (3.101833848s)
--- PASS: TestFunctional/serial/LogsCmd (3.10s)

TestFunctional/serial/LogsFileCmd (3.11s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2867305245/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2867305245/001/logs.txt: (3.113517557s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.11s)

TestFunctional/serial/InvalidService (4.22s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-216000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-216000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-216000: exit status 115 (535.700514ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32190 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-216000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 config get cpus: exit status 14 (44.735908ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 config get cpus: exit status 14 (43.400943ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
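Both "Non-zero exit" entries above reflect the same contract: "config get" on an unset key exits with status 14, which the harness captures rather than treating as a failure. A minimal sketch of how that exit code surfaces through os/exec (binary and profile taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-216000",
            "config", "get", "cpus").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("exit status:", ee.ExitCode()) // 14 when the key is unset
        }
    }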

TestFunctional/parallel/DashboardCmd (22.6s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-216000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-216000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22947: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.60s)

TestFunctional/parallel/DryRun (1.68s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-216000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-216000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (958.177249ms)
-- stdout --
	* [functional-216000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0613 11:53:09.760402   22832 out.go:296] Setting OutFile to fd 1 ...
	I0613 11:53:09.760718   22832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:53:09.760729   22832 out.go:309] Setting ErrFile to fd 2...
	I0613 11:53:09.760736   22832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:53:09.760899   22832 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 11:53:09.782278   22832 out.go:303] Setting JSON to false
	I0613 11:53:09.803344   22832 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6760,"bootTime":1686675629,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 11:53:09.803433   22832 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 11:53:09.851211   22832 out.go:177] * [functional-216000] minikube v1.30.1 on Darwin 13.4
	I0613 11:53:09.909434   22832 notify.go:220] Checking for updates...
	I0613 11:53:09.930244   22832 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 11:53:09.993286   22832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 11:53:10.072379   22832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 11:53:10.130466   22832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 11:53:10.151440   22832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 11:53:10.193563   22832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 11:53:10.217549   22832 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 11:53:10.217932   22832 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 11:53:10.279007   22832 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 11:53:10.279144   22832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:53:10.389862   22832 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 18:53:10.37448792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path
:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<n
il>}}
	I0613 11:53:10.473748   22832 out.go:177] * Using the docker driver based on existing profile
	I0613 11:53:10.515906   22832 start.go:297] selected driver: docker
	I0613 11:53:10.515927   22832 start.go:884] validating driver "docker" against &{Name:functional-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-216000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:53:10.516071   22832 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 11:53:10.560785   22832 out.go:177] 
	W0613 11:53:10.583221   22832 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0613 11:53:10.604766   22832 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-216000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.68s)

TestFunctional/parallel/InternationalLanguage (0.85s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-216000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-216000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (849.922113ms)
-- stdout --
	* [functional-216000] minikube v1.30.1 sur Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0613 11:53:11.404298   22898 out.go:296] Setting OutFile to fd 1 ...
	I0613 11:53:11.404466   22898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:53:11.404471   22898 out.go:309] Setting ErrFile to fd 2...
	I0613 11:53:11.404475   22898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 11:53:11.404585   22898 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 11:53:11.406130   22898 out.go:303] Setting JSON to false
	I0613 11:53:11.425961   22898 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6762,"bootTime":1686675629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0613 11:53:11.426047   22898 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0613 11:53:11.447506   22898 out.go:177] * [functional-216000] minikube v1.30.1 sur Darwin 13.4
	I0613 11:53:11.490669   22898 notify.go:220] Checking for updates...
	I0613 11:53:11.512226   22898 out.go:177]   - MINIKUBE_LOCATION=15003
	I0613 11:53:11.554343   22898 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	I0613 11:53:11.596431   22898 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0613 11:53:11.638272   22898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0613 11:53:11.701124   22898 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	I0613 11:53:11.745347   22898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0613 11:53:11.766731   22898 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 11:53:11.767356   22898 driver.go:373] Setting default libvirt URI to qemu:///system
	I0613 11:53:11.874932   22898 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
	I0613 11:53:11.875146   22898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0613 11:53:12.001652   22898 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:66 SystemTime:2023-06-13 18:53:11.988127583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<
nil>}}
	I0613 11:53:12.044954   22898 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0613 11:53:12.089139   22898 start.go:297] selected driver: docker
	I0613 11:53:12.089169   22898 start.go:884] validating driver "docker" against &{Name:functional-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-216000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0613 11:53:12.089338   22898 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0613 11:53:12.133843   22898 out.go:177] 
	W0613 11:53:12.154877   22898 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0613 11:53:12.175977   22898 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.85s)

TestFunctional/parallel/StatusCmd (1.13s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)
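The -f argument logged above is a Go text/template rendered against minikube's status struct, which is why the misspelled "kublet:" label is harmless: labels are free text, and only the field reference {{.Kubelet}} has to match a struct field. A self-contained illustration (the Status struct here is a stand-in for minikube's real type, not a copy of it):

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in with the fields the format string references.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
    }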

TestFunctional/parallel/AddonsCmd (0.29s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (27.08s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2ac5fd5f-f40a-45c1-9887-068209c12dae] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011245047s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-216000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-216000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-216000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-216000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6e14b7d8-0e47-4ddc-b228-c580a323adc3] Pending
helpers_test.go:344: "sp-pod" [6e14b7d8-0e47-4ddc-b228-c580a323adc3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6e14b7d8-0e47-4ddc-b228-c580a323adc3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.010176675s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-216000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-216000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-216000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [268a4a50-76c7-451f-9e40-9a78dbca847c] Pending
helpers_test.go:344: "sp-pod" [268a4a50-76c7-451f-9e40-9a78dbca847c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [268a4a50-76c7-451f-9e40-9a78dbca847c] Running
E0613 11:53:00.855760   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008471656s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-216000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.08s)
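The persistence check above can be replayed by hand. A minimal sketch, assuming a running functional-216000 profile and the testdata/ manifests from the minikube repository:

    # Write a file onto the PVC-backed mount, recreate the pod, and confirm the file survives.
    kubectl --context functional-216000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-216000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-216000 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-216000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-216000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-216000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-216000 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-216000 exec sp-pod -- ls /tmp/mount   # foo should still be listed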

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (1.78s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh -n functional-216000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 cp functional-216000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2489440495/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh -n functional-216000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

TestFunctional/parallel/MySQL (39.96s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-216000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-xscx2" [aa19da00-fbe5-4fa9-b78c-b4182ecbb930] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0613 11:51:44.050545   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
helpers_test.go:344: "mysql-7db894d786-xscx2" [aa19da00-fbe5-4fa9-b78c-b4182ecbb930] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 34.016812623s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;": exit status 1 (156.153556ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;": exit status 1 (118.198908ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;": exit status 1 (118.21688ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0613 11:52:19.893771   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.96s)
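The ERROR 1045 (access denied) and ERROR 2002 (socket not ready) exits above are expected while mysqld is still initializing; the test simply retries the query until it succeeds. A hand-rolled equivalent of that retry loop, assuming the same pod name:

    # Poll until mysqld inside the pod accepts the query.
    until kubectl --context functional-216000 exec mysql-7db894d786-xscx2 -- \
          mysql -ppassword -e "show databases;"; do
      sleep 2
    done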

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/20800/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /etc/test/nested/copy/20800/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/20800.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /etc/ssl/certs/20800.pem"
E0613 11:51:39.247713   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:51:39.567899   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/20800.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /usr/share/ca-certificates/20800.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0613 11:51:40.208532   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /etc/ssl/certs/208002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /etc/ssl/certs/208002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/208002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /usr/share/ca-certificates/208002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0613 11:51:41.490176   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (2.53s)
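The 51391683.0 and 3ec20f2e.0 names checked above appear to be OpenSSL subject-hash filenames for the synced certificates. Assuming openssl is installed, the hash for a given PEM can be computed locally and compared against the name the test looks for:

    # Prints the 8-hex-digit subject hash; /path/to/20800.pem is a placeholder path.
    openssl x509 -noout -hash -in /path/to/20800.pem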

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-216000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh "sudo systemctl is-active crio": exit status 1 (524.153848ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
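The non-zero exit here is the expected outcome: systemctl is-active exits with status 3 (and prints "inactive") when a unit is not running, which is exactly what the test wants for the unused cri-o runtime. Assuming an installed minikube binary, the same check is:

    minikube -p functional-216000 ssh "sudo systemctl is-active crio"; echo "exit=$?"   # expect inactive / non-zero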

TestFunctional/parallel/License (0.87s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.87s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.08s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 version -o=json --components: (1.082425504s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-216000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-216000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-216000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-216000 image ls --format short --alsologtostderr:
I0613 11:53:26.017703   23174 out.go:296] Setting OutFile to fd 1 ...
I0613 11:53:26.018444   23174 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:26.018456   23174 out.go:309] Setting ErrFile to fd 2...
I0613 11:53:26.018466   23174 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:26.018735   23174 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
I0613 11:53:26.020178   23174 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:26.020272   23174 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:26.020642   23174 cli_runner.go:164] Run: docker container inspect functional-216000 --format={{.State.Status}}
I0613 11:53:26.076615   23174 ssh_runner.go:195] Run: systemctl --version
I0613 11:53:26.076701   23174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-216000
I0613 11:53:26.128593   23174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55876 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/functional-216000/id_rsa Username:docker}
I0613 11:53:26.219634   23174 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-216000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-216000 | c18df9cb9034d | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | ac2b7465ebba9 | 112MB  |
| docker.io/library/mysql                     | 5.7               | dd6675b5cfea1 | 569MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-216000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.27.2           | c5b13e4f7806d | 121MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2           | b8aa50768fd67 | 71.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | alpine            | fe7edaf8a8dcf | 41.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-216000 | 946a55c07d2ed | 30B    |
| docker.io/library/nginx                     | latest            | 7d3c40f240e18 | 143MB  |
| registry.k8s.io/kube-scheduler              | v1.27.2           | 89e70da428d29 | 58.4MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 86b6af7dd652c | 296MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-216000 image ls --format table --alsologtostderr:
I0613 11:53:29.974705   23214 out.go:296] Setting OutFile to fd 1 ...
I0613 11:53:29.974877   23214 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:29.974882   23214 out.go:309] Setting ErrFile to fd 2...
I0613 11:53:29.974887   23214 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:29.975002   23214 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
I0613 11:53:29.975604   23214 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:29.975692   23214 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:29.976062   23214 cli_runner.go:164] Run: docker container inspect functional-216000 --format={{.State.Status}}
I0613 11:53:30.026548   23214 ssh_runner.go:195] Run: systemctl --version
I0613 11:53:30.026623   23214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-216000
I0613 11:53:30.079754   23214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55876 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/functional-216000/id_rsa Username:docker}
I0613 11:53:30.164042   23214 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/06/13 11:53:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-216000 image ls --format json --alsologtostderr:
[{"id":"c18df9cb9034d5eaecf0ddff53c6cdd56b4e7fa5140e3d99f5ec1d38efcbd3af","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-216000"],"size":"1240000"},{"id":"7d3c40f240e18f6b440bf06b1dfd8a9c48a49c1dfe3400772c3b378739cbdc47","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"143000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-216000"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"71100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"946a55c07d2edae621118e4e54c28a2c9062033ff0d07aa03fdab9cc0cf55c7d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-216000"],"size":"30"},{"id":"fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41400000"},{"id":"c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"121000000"},{"id":"89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"58400000"},{"id":"dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"112000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-216000 image ls --format json --alsologtostderr:
I0613 11:53:29.700474   23208 out.go:296] Setting OutFile to fd 1 ...
I0613 11:53:29.700649   23208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:29.700656   23208 out.go:309] Setting ErrFile to fd 2...
I0613 11:53:29.700660   23208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:29.700772   23208 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
I0613 11:53:29.701454   23208 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:29.701548   23208 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:29.701932   23208 cli_runner.go:164] Run: docker container inspect functional-216000 --format={{.State.Status}}
I0613 11:53:29.752584   23208 ssh_runner.go:195] Run: systemctl --version
I0613 11:53:29.752656   23208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-216000
I0613 11:53:29.804537   23208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55876 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/functional-216000/id_rsa Username:docker}
I0613 11:53:29.887192   23208 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
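The JSON listing above is machine-readable; for example, the five largest images can be pulled out with jq (an assumption; any JSON tool works):

    minikube -p functional-216000 image ls --format json | \
      jq -r 'sort_by(.size|tonumber) | reverse | .[:5][] | "\(.repoTags[0])\t\(.size)"'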

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-216000 image ls --format yaml --alsologtostderr:
- id: fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41400000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 946a55c07d2edae621118e4e54c28a2c9062033ff0d07aa03fdab9cc0cf55c7d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-216000
size: "30"
- id: b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "71100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "112000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-216000
size: "32900000"
- id: c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "121000000"
- id: 89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "58400000"
- id: dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "569000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7d3c40f240e18f6b440bf06b1dfd8a9c48a49c1dfe3400772c3b378739cbdc47
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "143000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-216000 image ls --format yaml --alsologtostderr:
I0613 11:53:26.312812   23180 out.go:296] Setting OutFile to fd 1 ...
I0613 11:53:26.312989   23180 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:26.312995   23180 out.go:309] Setting ErrFile to fd 2...
I0613 11:53:26.312999   23180 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:26.313114   23180 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
I0613 11:53:26.313751   23180 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:26.313845   23180 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:26.314241   23180 cli_runner.go:164] Run: docker container inspect functional-216000 --format={{.State.Status}}
I0613 11:53:26.369890   23180 ssh_runner.go:195] Run: systemctl --version
I0613 11:53:26.370023   23180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-216000
I0613 11:53:26.422638   23180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55876 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/functional-216000/id_rsa Username:docker}
I0613 11:53:26.521537   23180 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh pgrep buildkitd: exit status 1 (411.262426ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image build -t localhost/my-image:functional-216000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image build -t localhost/my-image:functional-216000 testdata/build --alsologtostderr: (2.400715836s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-216000 image build -t localhost/my-image:functional-216000 testdata/build --alsologtostderr:
I0613 11:53:27.024870   23196 out.go:296] Setting OutFile to fd 1 ...
I0613 11:53:27.025039   23196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:27.025044   23196 out.go:309] Setting ErrFile to fd 2...
I0613 11:53:27.025049   23196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:53:27.025165   23196 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
I0613 11:53:27.025803   23196 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:27.047052   23196 config.go:182] Loaded profile config "functional-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0613 11:53:27.047465   23196 cli_runner.go:164] Run: docker container inspect functional-216000 --format={{.State.Status}}
I0613 11:53:27.101938   23196 ssh_runner.go:195] Run: systemctl --version
I0613 11:53:27.102012   23196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-216000
I0613 11:53:27.159756   23196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55876 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/functional-216000/id_rsa Username:docker}
I0613 11:53:27.286096   23196 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3392805889.tar
I0613 11:53:27.286195   23196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0613 11:53:27.296193   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3392805889.tar
I0613 11:53:27.301450   23196 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3392805889.tar: stat -c "%s %y" /var/lib/minikube/build/build.3392805889.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3392805889.tar': No such file or directory
I0613 11:53:27.301501   23196 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3392805889.tar --> /var/lib/minikube/build/build.3392805889.tar (3072 bytes)
I0613 11:53:27.358320   23196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3392805889
I0613 11:53:27.370981   23196 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3392805889 -xf /var/lib/minikube/build/build.3392805889.tar
I0613 11:53:27.381460   23196 docker.go:339] Building image: /var/lib/minikube/build/build.3392805889
I0613 11:53:27.381558   23196 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-216000 /var/lib/minikube/build/build.3392805889
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c18df9cb9034d5eaecf0ddff53c6cdd56b4e7fa5140e3d99f5ec1d38efcbd3af done
#8 naming to localhost/my-image:functional-216000 done
#8 DONE 0.0s
I0613 11:53:29.326836   23196 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-216000 /var/lib/minikube/build/build.3392805889: (1.94520532s)
I0613 11:53:29.326901   23196 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3392805889
I0613 11:53:29.336152   23196 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3392805889.tar
I0613 11:53:29.345425   23196 build_images.go:207] Built localhost/my-image:functional-216000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3392805889.tar
I0613 11:53:29.345449   23196 build_images.go:123] succeeded building to: functional-216000
I0613 11:53:29.345453   23196 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.09s)
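As the log shows, minikube image build tars the local build context, copies it to /var/lib/minikube/build on the node, and runs docker build there, so no local Docker daemon is required. A minimal replay, assuming an installed minikube binary and the repo's testdata/build context:

    minikube -p functional-216000 image build -t localhost/my-image:functional-216000 testdata/build
    minikube -p functional-216000 image ls | grep my-image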

TestFunctional/parallel/ImageCommands/Setup (3.12s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.060738205s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-216000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.12s)

TestFunctional/parallel/DockerEnv/bash (1.79s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-216000 docker-env) && out/minikube-darwin-amd64 status -p functional-216000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-216000 docker-env) && out/minikube-darwin-amd64 status -p functional-216000": (1.126465651s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-216000 docker-env) && docker images"
E0613 11:51:38.926670   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:51:38.932657   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:51:38.942978   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:51:38.964860   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:51:39.004999   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:51:39.085143   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.79s)
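The docker-env pattern exercised here points the host docker CLI at the daemon inside the minikube node; a typical usage sketch, assuming an installed minikube binary:

    eval "$(minikube -p functional-216000 docker-env)"
    docker images                                                # now lists the cluster's images
    eval "$(minikube -p functional-216000 docker-env --unset)"   # restore the host daemon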

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image load --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image load --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr: (4.011396375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image load --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image load --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr: (2.520371717s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.81s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0613 11:51:49.170974   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.126441432s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-216000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image load --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image load --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr: (4.516806072s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image save gcr.io/google-containers/addon-resizer:functional-216000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image save gcr.io/google-containers/addon-resizer:functional-216000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.731254304s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.73s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image rm gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
E0613 11:51:59.412285   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.523454908s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.84s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-216000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 image save --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-216000 image save --daemon gcr.io/google-containers/addon-resizer:functional-216000 --alsologtostderr: (2.856639761s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-216000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.97s)

TestFunctional/parallel/ServiceCmd/DeployApp (17.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-216000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-216000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-rntxq" [ae2cee7a-3adc-47fc-a5ef-fd4a7a1aa69e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-rntxq" [ae2cee7a-3adc-47fc-a5ef-fd4a7a1aa69e] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.009595072s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.28s)
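The deploy step follows the usual create/expose/wait pattern; a minimal sketch, assuming the same kubectl context:

    kubectl --context functional-216000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-216000 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-216000 wait --for=condition=Available deployment/hello-node --timeout=10m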

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 service list -o json
functional_test.go:1493: Took "485.069902ms" to run "out/minikube-darwin-amd64 -p functional-216000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 service --namespace=default --https --url hello-node: signal: killed (15.004046924s)

-- stdout --
	https://127.0.0.1:56124

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:56124
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
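Note: the non-zero "signal: killed" exit above is expected, not a failure. On the Docker driver for darwin, `minikube service --url` stays in the foreground to keep the tunnel open, so the test gives it a fixed budget, kills it, and scrapes the URL from the captured stdout. A sketch of that pattern (binary path and flags copied from the log):

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "functional-216000",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run() // expected to be "signal: killed" once the 15s context expires
	for _, line := range strings.Split(out.String(), "\n") {
		if s := strings.TrimSpace(line); strings.HasPrefix(s, "https://") {
			fmt.Println("found endpoint:", s)
			return
		}
	}
	fmt.Println("no endpoint before timeout:", err)
}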

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-216000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-216000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-216000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 22653: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-216000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
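Note: "unable to kill pid ...: os: process already finished" is benign here: the second tunnel exits on its own once it detects the first, so the cleanup path must tolerate a process that is already gone. A sketch of that tolerant stop, with os.ErrProcessDone standing in for the message above (illustrative, not the helpers_test.go code):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// stopDaemon kills a background process but accepts that it may have exited already.
func stopDaemon(cmd *exec.Cmd) error {
	if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		return err // a real failure, not just "process already finished"
	}
	_ = cmd.Wait() // reap the child either way
	return nil
}

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-216000", "tunnel", "--alsologtostderr")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop:", stopDaemon(cmd))
}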

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-216000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-216000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [80ecc32c-1e2b-4c48-bafe-0e8e00592c8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [80ecc32c-1e2b-4c48-bafe-0e8e00592c8e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.009837262s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.16s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-216000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-216000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 22683: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 service hello-node --url --format={{.IP}}: signal: killed (15.001492914s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 service hello-node --url: signal: killed (15.003160892s)

-- stdout --
	http://127.0.0.1:56190

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:56190
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "374.344774ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "63.751847ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "436.527481ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "68.328088ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2456702144/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686682389268335000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2456702144/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686682389268335000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2456702144/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686682389268335000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2456702144/001/test-1686682389268335000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (398.309784ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 13 18:53 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 13 18:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 13 18:53 test-1686682389268335000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh cat /mount-9p/test-1686682389268335000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-216000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [13d5e0b3-35a9-4567-9423-f9972a5990ec] Pending
helpers_test.go:344: "busybox-mount" [13d5e0b3-35a9-4567-9423-f9972a5990ec] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [13d5e0b3-35a9-4567-9423-f9972a5990ec] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [13d5e0b3-35a9-4567-9423-f9972a5990ec] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.021537368s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-216000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2456702144/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.54s)
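Note: the first findmnt probe above fails with exit status 1 because the 9p mount is not visible in the guest yet; the test simply probes again. A minimal sketch of that retry loop, reusing the exact ssh subcommand from the log (retry budget is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-216000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is visible in the guest")
			return
		}
		time.Sleep(time.Second) // mount not ready yet; probe again
	}
	fmt.Println("mount never appeared")
}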

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.55s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1013215262/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.311706ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1013215262/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh "sudo umount -f /mount-9p": exit status 1 (353.627308ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-216000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1013215262/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2824096420/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2824096420/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2824096420/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T" /mount1: exit status 1 (591.489723ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-216000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-216000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2824096420/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2824096420/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-216000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2824096420/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-216000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-216000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-216000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (25.32s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-793000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-793000 --driver=docker : (25.315128106s)
--- PASS: TestImageBuild/serial/Setup (25.32s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.15s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-793000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-793000: (2.149467963s)
--- PASS: TestImageBuild/serial/NormalBuild (2.15s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.84s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-793000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.84s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-793000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-793000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

                                                
                                    
TestJSONOutput/start/Command (40.32s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-809000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0613 12:02:10.071322   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-809000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (40.317230241s)
--- PASS: TestJSONOutput/start/Command (40.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-809000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-809000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-809000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-809000 --output=json --user=testUser: (5.855942213s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.7s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-241000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-241000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (346.183618ms)

-- stdout --
	{"specversion":"1.0","id":"7eb41aaa-b433-4553-82a9-c26d495f1fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-241000] minikube v1.30.1 on Darwin 13.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c053199-968b-4908-b3da-a972f76b1e09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15003"}}
	{"specversion":"1.0","id":"48704c6e-c616-4ec9-966e-b4b69996c896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig"}}
	{"specversion":"1.0","id":"6e8ce46a-0b94-43b0-b2ef-703d7768f583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5593c8d5-4ecc-4c68-8cb2-a9553e303033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"44fce454-19bb-4b67-bfc6-cac8386f13c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube"}}
	{"specversion":"1.0","id":"9a564d2f-0beb-4848-b16a-763555763ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7d20c26-9222-48f1-851b-f9a746e8536a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-241000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-241000
--- PASS: TestErrorJSONOutput (0.70s)
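Note: each line in the -- stdout -- block above is a CloudEvents-style JSON object, and the test asserts on the final io.k8s.sigs.minikube.error event (DRV_UNSUPPORTED_OS, exit code 56). A small sketch that decodes such a stream, using only the fields visible in the log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Specversion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json ...` in here
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON noise
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}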

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.86s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-643000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-643000 --network=: (24.269719245s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-643000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-643000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-643000: (2.542831252s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.86s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (27.46s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-686000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-686000 --network=bridge: (25.062680548s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-686000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-686000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-686000: (2.341259178s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.46s)

                                                
                                    
TestKicExistingNetwork (26.98s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-904000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-904000 --network=existing-network: (24.307376011s)
helpers_test.go:175: Cleaning up "existing-network-904000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-904000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-904000: (2.335937082s)
--- PASS: TestKicExistingNetwork (26.98s)

                                                
                                    
TestKicCustomSubnet (27.12s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-127000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-127000 --subnet=192.168.60.0/24: (24.597629448s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-127000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-127000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-127000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-127000: (2.468442398s)
--- PASS: TestKicCustomSubnet (27.12s)
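Note: the inspect command above is the assertion: ask Docker for the network's IPAM config and compare it to the requested --subnet. The same check as a small sketch (network name and subnet copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-127000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", got)
	} else {
		fmt.Println("subnet matches:", got)
	}
}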

                                                
                                    
TestKicStaticIP (27.52s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-915000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-915000 --static-ip=192.168.200.200: (24.868215564s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-915000 ip
helpers_test.go:175: Cleaning up "static-ip-915000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-915000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-915000: (2.437478435s)
--- PASS: TestKicStaticIP (27.52s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (56.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-369000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-369000 --driver=docker : (24.920953306s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-371000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-371000 --driver=docker : (24.764696641s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-369000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-371000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-371000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-371000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-371000: (2.478380601s)
helpers_test.go:175: Cleaning up "first-369000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-369000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-369000: (2.479766177s)
--- PASS: TestMinikubeProfile (56.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-712000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-712000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.151848697s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.15s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-712000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.75s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-725000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-725000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.751831635s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.75s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-725000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.04s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-712000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-712000 --alsologtostderr -v=5: (2.041388351s)
--- PASS: TestMountStart/serial/DeleteFirst (2.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-725000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.53s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-725000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-725000: (1.53130126s)
--- PASS: TestMountStart/serial/Stop (1.53s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-725000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-725000: (8.101634672s)
--- PASS: TestMountStart/serial/RestartStopped (9.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-725000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.22s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-715000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0613 12:06:38.878869   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:06:42.308272   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-715000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m10.514622066s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.22s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (45.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-715000 -- rollout status deployment/busybox: (3.706082435s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0613 12:08:01.929498   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-9xq96 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-xtbbx -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-9xq96 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-xtbbx -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-9xq96 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-xtbbx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (45.16s)
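Note: the repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are the test polling, not flaking: the busybox replica scheduled to the second node receives its IP later than the first, so the jsonpath query is retried until two IPs appear. A sketch of that loop, with the retry budget as an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 20; i++ {
		out, err := exec.Command("out/minikube-darwin-amd64", "kubectl", "-p", "multinode-715000",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) == 2 {
				fmt.Println("both pod IPs assigned:", ips)
				return
			}
		}
		time.Sleep(3 * time.Second) // may be temporary; retry
	}
	fmt.Println("second pod IP never appeared")
}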

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-9xq96 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-9xq96 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-xtbbx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-715000 -- exec busybox-67b7f59bb-xtbbx -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
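Note: the shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of nslookup's output and keeps its third space-separated field, which is the host gateway IP pinged next (192.168.65.254 here). The same parsing in Go, run against a made-up sample of busybox nslookup output:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics awk 'NR==5' | cut -d' ' -f3: fifth line, third field,
// splitting on single spaces exactly as cut does.
func hostIP(nslookup string) string {
	lines := strings.Split(nslookup, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative output only; real busybox formatting can differ.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.65.254 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.65.254
}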

                                                
                                    
TestMultiNode/serial/AddNode (18.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-715000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-715000 -v 3 --alsologtostderr: (17.755737495s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.68s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

                                                
                                    
TestMultiNode/serial/CopyFile (13.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp testdata/cp-test.txt multinode-715000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3995549459/001/cp-test_multinode-715000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000:/home/docker/cp-test.txt multinode-715000-m02:/home/docker/cp-test_multinode-715000_multinode-715000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m02 "sudo cat /home/docker/cp-test_multinode-715000_multinode-715000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000:/home/docker/cp-test.txt multinode-715000-m03:/home/docker/cp-test_multinode-715000_multinode-715000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m03 "sudo cat /home/docker/cp-test_multinode-715000_multinode-715000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp testdata/cp-test.txt multinode-715000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3995549459/001/cp-test_multinode-715000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000-m02:/home/docker/cp-test.txt multinode-715000:/home/docker/cp-test_multinode-715000-m02_multinode-715000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000 "sudo cat /home/docker/cp-test_multinode-715000-m02_multinode-715000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000-m02:/home/docker/cp-test.txt multinode-715000-m03:/home/docker/cp-test_multinode-715000-m02_multinode-715000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m03 "sudo cat /home/docker/cp-test_multinode-715000-m02_multinode-715000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp testdata/cp-test.txt multinode-715000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3995549459/001/cp-test_multinode-715000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000-m03:/home/docker/cp-test.txt multinode-715000:/home/docker/cp-test_multinode-715000-m03_multinode-715000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000 "sudo cat /home/docker/cp-test_multinode-715000-m03_multinode-715000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 cp multinode-715000-m03:/home/docker/cp-test.txt multinode-715000-m02:/home/docker/cp-test_multinode-715000-m03_multinode-715000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 ssh -n multinode-715000-m02 "sudo cat /home/docker/cp-test_multinode-715000-m03_multinode-715000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (13.07s)

TestMultiNode/serial/StopNode (2.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-715000 node stop m03: (1.476799232s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-715000 status: exit status 7 (671.47785ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-715000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-715000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr: exit status 7 (685.567409ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-715000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-715000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0613 12:09:06.666106   26295 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:09:06.666286   26295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:09:06.666292   26295 out.go:309] Setting ErrFile to fd 2...
	I0613 12:09:06.666296   26295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:09:06.666414   26295 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:09:06.666620   26295 out.go:303] Setting JSON to false
	I0613 12:09:06.666640   26295 mustload.go:65] Loading cluster: multinode-715000
	I0613 12:09:06.666696   26295 notify.go:220] Checking for updates...
	I0613 12:09:06.666912   26295 config.go:182] Loaded profile config "multinode-715000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:09:06.666926   26295 status.go:255] checking status of multinode-715000 ...
	I0613 12:09:06.667320   26295 cli_runner.go:164] Run: docker container inspect multinode-715000 --format={{.State.Status}}
	I0613 12:09:06.717861   26295 status.go:330] multinode-715000 host status = "Running" (err=<nil>)
	I0613 12:09:06.718003   26295 host.go:66] Checking if "multinode-715000" exists ...
	I0613 12:09:06.718257   26295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-715000
	I0613 12:09:06.770482   26295 host.go:66] Checking if "multinode-715000" exists ...
	I0613 12:09:06.770783   26295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:09:06.770849   26295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-715000
	I0613 12:09:06.821583   26295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56704 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/multinode-715000/id_rsa Username:docker}
	I0613 12:09:06.908151   26295 ssh_runner.go:195] Run: systemctl --version
	I0613 12:09:06.913560   26295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:09:06.924737   26295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-715000
	I0613 12:09:06.975397   26295 kubeconfig.go:92] found "multinode-715000" server: "https://127.0.0.1:56708"
	I0613 12:09:06.975421   26295 api_server.go:166] Checking apiserver status ...
	I0613 12:09:06.975463   26295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0613 12:09:06.987337   26295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2210/cgroup
	W0613 12:09:06.997309   26295 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2210/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0613 12:09:06.997369   26295 ssh_runner.go:195] Run: ls
	I0613 12:09:07.002093   26295 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56708/healthz ...
	I0613 12:09:07.008273   26295 api_server.go:279] https://127.0.0.1:56708/healthz returned 200:
	ok
	I0613 12:09:07.008288   26295 status.go:421] multinode-715000 apiserver status = Running (err=<nil>)
	I0613 12:09:07.008298   26295 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0613 12:09:07.008309   26295 status.go:255] checking status of multinode-715000-m02 ...
	I0613 12:09:07.008573   26295 cli_runner.go:164] Run: docker container inspect multinode-715000-m02 --format={{.State.Status}}
	I0613 12:09:07.059459   26295 status.go:330] multinode-715000-m02 host status = "Running" (err=<nil>)
	I0613 12:09:07.059481   26295 host.go:66] Checking if "multinode-715000-m02" exists ...
	I0613 12:09:07.059755   26295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-715000-m02
	I0613 12:09:07.111725   26295 host.go:66] Checking if "multinode-715000-m02" exists ...
	I0613 12:09:07.112034   26295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0613 12:09:07.112103   26295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-715000-m02
	I0613 12:09:07.162289   26295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56744 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/multinode-715000-m02/id_rsa Username:docker}
	I0613 12:09:07.246588   26295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0613 12:09:07.257552   26295 status.go:257] multinode-715000-m02 status: &{Name:multinode-715000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0613 12:09:07.257570   26295 status.go:255] checking status of multinode-715000-m03 ...
	I0613 12:09:07.257855   26295 cli_runner.go:164] Run: docker container inspect multinode-715000-m03 --format={{.State.Status}}
	I0613 12:09:07.308319   26295 status.go:330] multinode-715000-m03 host status = "Stopped" (err=<nil>)
	I0613 12:09:07.308338   26295 status.go:343] host is not running, skipping remaining checks
	I0613 12:09:07.308345   26295 status.go:257] multinode-715000-m03 status: &{Name:multinode-715000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.83s)

TestMultiNode/serial/StartAfterStop (13.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-715000 node start m03 --alsologtostderr: (12.32672924s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.30s)

TestMultiNode/serial/RestartKeepsNodes (120.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-715000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-715000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-715000: (23.022074408s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr: (1m37.578904041s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-715000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.69s)

TestMultiNode/serial/DeleteNode (5.86s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-715000 node delete m03: (5.013910727s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.86s)

TestMultiNode/serial/StopMultiNode (21.75s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 stop
E0613 12:11:38.877277   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:11:42.305088   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-715000 stop: (21.46919023s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-715000 status: exit status 7 (142.859695ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-715000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr: exit status 7 (140.855183ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-715000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0613 12:11:48.812995   26756 out.go:296] Setting OutFile to fd 1 ...
	I0613 12:11:48.813202   26756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:11:48.813207   26756 out.go:309] Setting ErrFile to fd 2...
	I0613 12:11:48.813211   26756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0613 12:11:48.813319   26756 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
	I0613 12:11:48.813510   26756 out.go:303] Setting JSON to false
	I0613 12:11:48.813531   26756 mustload.go:65] Loading cluster: multinode-715000
	I0613 12:11:48.813583   26756 notify.go:220] Checking for updates...
	I0613 12:11:48.813822   26756 config.go:182] Loaded profile config "multinode-715000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0613 12:11:48.813835   26756 status.go:255] checking status of multinode-715000 ...
	I0613 12:11:48.814219   26756 cli_runner.go:164] Run: docker container inspect multinode-715000 --format={{.State.Status}}
	I0613 12:11:48.864010   26756 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0613 12:11:48.864026   26756 status.go:343] host is not running, skipping remaining checks
	I0613 12:11:48.864032   26756 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0613 12:11:48.864056   26756 status.go:255] checking status of multinode-715000-m02 ...
	I0613 12:11:48.864324   26756 cli_runner.go:164] Run: docker container inspect multinode-715000-m02 --format={{.State.Status}}
	I0613 12:11:48.913048   26756 status.go:330] multinode-715000-m02 host status = "Stopped" (err=<nil>)
	I0613 12:11:48.913067   26756 status.go:343] host is not running, skipping remaining checks
	I0613 12:11:48.913076   26756 status.go:257] multinode-715000-m02 status: &{Name:multinode-715000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.75s)

TestMultiNode/serial/RestartMultiNode (57.56s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr --driver=docker : (56.737151809s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-715000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.56s)

TestMultiNode/serial/ValidateNameConflict (28.78s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-715000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-715000-m02 --driver=docker 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-715000-m02 --driver=docker : exit status 14 (387.646519ms)

-- stdout --
	* [multinode-715000-m02] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-715000-m02' is duplicated with machine name 'multinode-715000-m02' in profile 'multinode-715000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-715000-m03 --driver=docker 
E0613 12:13:05.360563   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-715000-m03 --driver=docker : (25.405813346s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-715000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-715000: exit status 80 (448.127406ms)

-- stdout --
	* Adding node m03 to cluster multinode-715000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-715000-m03 already exists in multinode-715000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-715000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-715000-m03: (2.495760746s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.78s)

TestPreload (211.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-281000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-281000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m57.070731653s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-281000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-281000 image pull gcr.io/k8s-minikube/busybox: (2.440286139s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-281000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-281000: (10.79932391s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-281000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0613 12:16:38.871995   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:16:42.301064   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-281000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m18.190089643s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-281000 image list
helpers_test.go:175: Cleaning up "test-preload-281000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-281000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-281000: (2.580209419s)
--- PASS: TestPreload (211.36s)

TestScheduledStopUnix (98.58s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-935000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-935000 --memory=2048 --driver=docker : (24.570487234s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-935000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-935000 -n scheduled-stop-935000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-935000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-935000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-935000 -n scheduled-stop-935000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-935000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-935000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-935000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-935000: exit status 7 (97.094502ms)

-- stdout --
	scheduled-stop-935000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-935000 -n scheduled-stop-935000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-935000 -n scheduled-stop-935000: exit status 7 (92.064709ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-935000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-935000: (2.236672418s)
--- PASS: TestScheduledStopUnix (98.58s)

TestSkaffold (122.88s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1223398586 version
skaffold_test.go:63: skaffold version: v2.5.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-986000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-986000 --memory=2600 --driver=docker : (24.538566703s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1223398586 run --minikube-profile skaffold-986000 --kube-context skaffold-986000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1223398586 run --minikube-profile skaffold-986000 --kube-context skaffold-986000 --status-check=true --port-forward=false --interactive=false: (1m16.57842504s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-869f946946-q2b5w" [7d5908b1-97f4-44b8-99fd-ee56c4feaa77] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013252884s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6d79bb79b7-flnk4" [270dcecd-de7b-4b08-9739-c72e11388a11] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007629606s
helpers_test.go:175: Cleaning up "skaffold-986000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-986000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-986000: (3.215930073s)
--- PASS: TestSkaffold (122.88s)

TestInsufficientStorage (13.9s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-839000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-839000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (10.906309795s)

-- stdout --
	{"specversion":"1.0","id":"c0dc642a-aac6-4a0f-bc80-283a8e335f1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-839000] minikube v1.30.1 on Darwin 13.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"60125416-2824-44dc-89e0-e376773ba029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15003"}}
	{"specversion":"1.0","id":"68c33cfe-8d09-483a-b300-26b28baecffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig"}}
	{"specversion":"1.0","id":"91609139-c2d9-4715-8383-edbed707bc35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3ca40f51-aa4f-4e43-89a0-e228783b7a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8308ecc0-69c4-4081-badf-6fbc18249a33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube"}}
	{"specversion":"1.0","id":"984c9fea-28fe-469e-9ec1-47fa22d208de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"00dcc394-3a16-4fa0-87c4-f2b9ef41df65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d5fa0077-7135-4240-a292-17353f240eda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f2e77ab4-25e4-4a50-8af0-a4880eb3a9e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4d072ac-1cd1-4874-9e5d-16359b1c86d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"0c1bae1e-bd18-489d-a8be-0e1720cda211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-839000 in cluster insufficient-storage-839000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"521029ab-6b98-4290-8dfd-0dfc446cffa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d6dd966-e13b-47c0-a1de-bc99c4780af1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"39a81c64-dcab-490b-9647-b68f0f459463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-839000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-839000 --output=json --layout=cluster: exit status 7 (354.463876ms)

-- stdout --
	{"Name":"insufficient-storage-839000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-839000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0613 12:20:44.019909   28293 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-839000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-839000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-839000 --output=json --layout=cluster: exit status 7 (354.073876ms)

-- stdout --
	{"Name":"insufficient-storage-839000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-839000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0613 12:20:44.374581   28303 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-839000" does not appear in /Users/jenkins/minikube-integration/15003-20351/kubeconfig
	E0613 12:20:44.385039   28303 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/insufficient-storage-839000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-839000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-839000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-839000: (2.283488659s)
--- PASS: TestInsufficientStorage (13.90s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (20.78s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=15003
- KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2065393202/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2065393202/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2065393202/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2065393202/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (20.78s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (15.82s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=15003
- KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1819715595/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1819715595/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1819715595/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1819715595/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (15.82s)

TestStoppedBinaryUpgrade/Setup (4.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.52s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-326000
version_upgrade_test.go:218: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-326000: (3.540318992s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

TestPause/serial/Start (76.76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-879000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0613 12:28:03.362073   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-879000 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m16.761884538s)
--- PASS: TestPause/serial/Start (76.76s)

TestPause/serial/SecondStartNoReconfiguration (38.89s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-879000 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-879000 --alsologtostderr -v=1 --driver=docker : (38.869082763s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.89s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-879000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-879000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-879000 --output=json --layout=cluster: exit status 2 (369.324656ms)

-- stdout --
	{"Name":"pause-879000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-879000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-879000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-879000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.51s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-879000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-879000 --alsologtostderr -v=5: (2.507926282s)
--- PASS: TestPause/serial/DeletePaused (2.51s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-879000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-879000: exit status 1 (59.945981ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-879000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-906000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-906000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (384.22769ms)

-- stdout --
	* [NoKubernetes-906000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=15003
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

TestNoKubernetes/serial/StartWithStopK8s (10.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-906000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-906000 --no-kubernetes --driver=docker : (7.052104025s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-906000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-906000 status -o json: exit status 2 (438.641932ms)

-- stdout --
	{"Name":"NoKubernetes-906000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-906000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-906000: (2.644400625s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.14s)

TestNoKubernetes/serial/Start (7.67s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-906000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-906000 --no-kubernetes --driver=docker : (7.666880372s)
--- PASS: TestNoKubernetes/serial/Start (7.67s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-906000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-906000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.250296ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

TestNoKubernetes/serial/ProfileList (34.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0613 12:30:19.511746   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (16.23245248s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
E0613 12:30:47.200698   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (18.526617383s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.76s)

TestNoKubernetes/serial/Stop (1.51s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-906000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-906000: (1.509905186s)
--- PASS: TestNoKubernetes/serial/Stop (1.51s)

TestNoKubernetes/serial/StartNoArgs (7.98s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-906000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-906000 --driver=docker : (7.975992807s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.98s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-906000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-906000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (333.255141ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestNetworkPlugins/group/auto/Start (39.75s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0613 12:31:38.829189   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:31:42.258618   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (39.745204209s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.75s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (13.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-sm6tj" [e2678544-eb0a-4d4d-a3e9-3d27af0cc915] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-sm6tj" [e2678544-eb0a-4d4d-a3e9-3d27af0cc915] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.009859688s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.24s)
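The wait above polls for pods matching the app=netcat label until they report Running and Ready. Roughly the same gate can be expressed directly with kubectl, reusing the label and timeout from the log; this is a sketch, not what the test harness itself runs, and kubectl wait needs the pods to already exist, so it is only a rough equivalent:

    # block until every app=netcat pod in the default namespace is Ready, or 15m elapses
    kubectl --context auto-185000 wait --for=condition=Ready pod -l app=netcat --timeout=15m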

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)
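The short name kubernetes.default resolves through the pod's resolv.conf search domains; fully qualified it is kubernetes.default.svc plus the cluster domain, which defaults to cluster.local. A quick cross-check that both forms answer with the same ClusterIP, reusing the deployment from this test:

    # the FQDN form should return the same address as the short name above
    kubectl --context auto-185000 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local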

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
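Taken together, the Localhost and HairPin probes exercise two return paths: the first nc dials localhost:8080 inside the pod, while the second dials the pod's own service name, so that connection leaves the pod and is routed back to it (hairpin traffic). In both commands -z opens the connection without sending data, -w 5 caps the wait at five seconds, and -i 5 spaces successive probes five seconds apart. Side by side, with the same flags as the log:

    kubectl --context auto-185000 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080   # loopback, never leaves the pod
    kubectl --context auto-185000 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080      # via the netcat service, hairpin path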

TestNetworkPlugins/group/kindnet/Start (54.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (54.279602434s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.28s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-748fq" [a831529c-d1e5-4f59-9105-ecfa718ea392] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014892251s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-k4b7k" [80596931-ad37-4856-a18c-75eb3ca4a318] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-k4b7k" [80596931-ad37-4856-a18c-75eb3ca4a318] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.010055673s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (70.96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m10.957679193s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.96s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cb6x4" [fa9842e8-e156-49e1-8a46-1a8d53720405] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018456249s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/Start (54.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (54.627740068s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.63s)

TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

TestNetworkPlugins/group/calico/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-84k44" [42f50dab-7616-4946-8762-3073c1ed7fc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-84k44" [42f50dab-7616-4946-8762-3073c1ed7fc1] Running
E0613 12:35:19.504861   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.007791283s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.37s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/false/Start (42.37s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (42.366112543s)
--- PASS: TestNetworkPlugins/group/false/Start (42.37s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-ng64j" [b6d5b2b9-5151-4a2d-b8c3-fb260b3a0264] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-ng64j" [b6d5b2b9-5151-4a2d-b8c3-fb260b3a0264] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.009091376s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.38s)

TestNetworkPlugins/group/false/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rsztw" [64a6ec95-ad9e-4b36-85f6-53c96f799ab3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rsztw" [64a6ec95-ad9e-4b36-85f6-53c96f799ab3] Running
E0613 12:36:38.828006   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.007318057s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.26s)

TestNetworkPlugins/group/enable-default-cni/Start (41.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
E0613 12:36:42.258170   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (41.254919752s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.26s)

TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (54.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (54.755296913s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.76s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-54qgc" [8a4860a2-3378-415a-a944-ebf802d71b00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0613 12:37:27.310087   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-54qgc" [8a4860a2-3378-415a-a944-ebf802d71b00] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00924855s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (49.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (49.036664969s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.04s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gttv8" [d3dc6c15-43db-4ba3-bc1e-7aaa53931d2e] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019785928s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (13.30s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-185000 replace --force -f testdata/netcat-deployment.yaml
E0613 12:38:08.270022   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-f4r2b" [79fb84dc-bfc2-4a9d-9551-c78e05c03545] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-f4r2b" [79fb84dc-bfc2-4a9d-9551-c78e05c03545] Running
E0613 12:38:15.661810   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:15.668177   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:15.680346   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:15.700907   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:15.741024   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:15.822622   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:15.982727   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:16.304892   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:16.945167   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:18.268556   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:38:20.829969   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.007108652s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.30s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (41.96s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-185000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (41.961639457s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (41.96s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-f8hj9" [fd01aaff-b8f4-41c6-ac5c-f5b5ad49de77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-f8hj9" [fd01aaff-b8f4-41c6-ac5c-f5b5ad49de77] Running
E0613 12:38:56.672470   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.009410392s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-185000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.41s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-185000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jpxsp" [c14a9cae-edae-4700-9876-22b22e9d38d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0613 12:39:30.188555   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-jpxsp" [c14a9cae-edae-4700-9876-22b22e9d38d2] Running
E0613 12:39:37.631787   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.141728079s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.41s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-185000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-185000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

TestStartStop/group/no-preload/serial/FirstStart (66.20s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-874000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.2
E0613 12:40:05.790311   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:05.795903   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:05.806262   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:05.826349   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:05.866557   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:05.946678   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:06.107637   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:06.427715   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:07.069768   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:08.349890   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:10.910245   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:16.030251   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:19.505264   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:40:26.270240   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:46.750619   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:40:59.552277   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:41:05.882026   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:05.888069   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:05.900258   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:05.920393   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:05.961117   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:06.041528   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:06.201828   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:06.522249   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:07.162962   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:08.443750   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-874000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.2: (1m6.20317429s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.20s)
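With --preload=false, minikube skips the preloaded image tarball and pulls each Kubernetes component image individually during start. One way to inspect what ended up in the node's runtime afterwards; a sketch, and the exact listing depends on the container runtime in use:

    # list the images present inside the no-preload-874000 node
    out/minikube-darwin-amd64 image ls -p no-preload-874000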

TestStartStop/group/no-preload/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-874000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [558672aa-044d-4afd-adff-e609ae7c2123] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0613 12:41:11.003891   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [558672aa-044d-4afd-adff-e609ae7c2123] Running
E0613 12:41:16.124113   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.01829489s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-874000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)
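The final exec reads the container's soft limit on open file descriptors. Checking both limits by hand looks like the sketch below; the -H flag for the hard limit is an assumption about the busybox shell build, though most builds support it:

    kubectl --context no-preload-874000 exec busybox -- /bin/sh -c "ulimit -n"    # soft limit
    kubectl --context no-preload-874000 exec busybox -- /bin/sh -c "ulimit -Hn"   # hard limit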

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-874000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-874000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/no-preload/serial/Stop (10.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-874000 --alsologtostderr -v=3
E0613 12:41:21.874181   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:41:26.364213   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:27.710387   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-874000 --alsologtostderr -v=3: (10.943638359s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-874000 -n no-preload-874000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-874000 -n no-preload-874000: exit status 7 (93.637063ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-874000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)
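The "(may be ok)" note reflects minikube's convention of encoding host state in the status exit code: a cleanly stopped host yields a non-zero code rather than 0, and this run shows exit status 7. A sketch of branching on it in a script:

    out/minikube-darwin-amd64 status --format='{{.Host}}' -p no-preload-874000
    rc=$?
    # 7 is what this log shows for a stopped host; treat it as non-fatal
    [ "$rc" -eq 7 ] && echo "host stopped, ok to enable addons before the next start"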

TestStartStop/group/no-preload/serial/SecondStart (582.16s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-874000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.2
E0613 12:41:31.665030   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:31.671335   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:31.681444   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:31.701670   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:31.743825   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:31.826007   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:31.986710   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:32.308808   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:32.949542   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:34.229660   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:36.790442   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:38.821832   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:41:41.911259   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:41:42.251090   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 12:41:42.554381   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/skaffold-986000/client.crt: no such file or directory
E0613 12:41:46.283482   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
E0613 12:41:46.844348   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:41:52.151178   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:42:12.630960   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:42:14.026811   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
E0613 12:42:23.567832   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:23.574226   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:23.585071   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:23.606332   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:23.647078   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:23.727537   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:23.887615   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:24.208312   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:24.849125   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:26.129435   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:27.803752   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 12:42:28.689518   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:33.809674   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:44.049673   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:42:49.628790   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
E0613 12:42:53.591320   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:43:02.580535   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:02.586933   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:02.599120   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:02.621306   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:02.661458   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:02.741779   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:02.902293   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:03.223072   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:03.863311   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:04.529944   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
E0613 12:43:05.143430   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:07.703746   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:12.824709   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:43:15.655677   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
E0613 12:43:23.066724   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-874000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.2: (9m41.777847278s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-874000 -n no-preload-874000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (582.16s)
TestStartStop/group/old-k8s-version/serial/Stop (1.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-554000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-554000 --alsologtostderr -v=3: (1.490309505s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.49s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (90.395089ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-554000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.35s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4nlsv" [d166d90f-fcf6-4e2e-8caf-2c31bd9d365c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014125633s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4nlsv" [d166d90f-fcf6-4e2e-8caf-2c31bd9d365c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009568308s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-874000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-874000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)
TestStartStop/group/no-preload/serial/Pause (3.14s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-874000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-874000 -n no-preload-874000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-874000 -n no-preload-874000: exit status 2 (370.034822ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-874000 -n no-preload-874000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-874000 -n no-preload-874000: exit status 2 (389.284671ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-874000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-874000 -n no-preload-874000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-874000 -n no-preload-874000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)
TestStartStop/group/embed-certs/serial/FirstStart (79.01s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-550000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.2
E0613 12:51:31.827826   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/false-185000/client.crt: no such file or directory
E0613 12:51:38.985789   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 12:51:42.416286   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 12:51:46.447982   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
E0613 12:52:23.735887   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/enable-default-cni-185000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-550000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.2: (1m19.01114814s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.01s)
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-550000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e55e159-afb0-4cd7-bfeb-d47adea977bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e55e159-afb0-4cd7-bfeb-d47adea977bd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.015534686s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-550000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-550000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-550000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)
TestStartStop/group/embed-certs/serial/Stop (11.04s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-550000 --alsologtostderr -v=3
E0613 12:53:02.747610   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/flannel-185000/client.crt: no such file or directory
E0613 12:53:09.555562   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-550000 --alsologtostderr -v=3: (11.038654019s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.04s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-550000 -n embed-certs-550000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-550000 -n embed-certs-550000: exit status 7 (91.337321ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-550000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)
TestStartStop/group/embed-certs/serial/SecondStart (334.77s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-550000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.2
E0613 12:53:15.826274   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/kindnet-185000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-550000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.2: (5m34.361045539s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-550000 -n embed-certs-550000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.77s)
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kwkz4" [8278caa8-ea45-4be9-b292-359b587cec6c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0613 12:58:48.489455   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
E0613 12:58:53.999032   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kwkz4" [8278caa8-ea45-4be9-b292-359b587cec6c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.012210949s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.01s)
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kwkz4" [8278caa8-ea45-4be9-b292-359b587cec6c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006479965s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-550000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-550000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)
TestStartStop/group/embed-certs/serial/Pause (2.99s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-550000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-550000 -n embed-certs-550000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-550000 -n embed-certs-550000: exit status 2 (380.593706ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-550000 -n embed-certs-550000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-550000 -n embed-certs-550000: exit status 2 (373.247705ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-550000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-550000 -n embed-certs-550000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-550000 -n embed-certs-550000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-690000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-690000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.2: (49.999936206s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.00s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-690000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5b07c19-0f88-4d60-9a22-0ecb5d7900fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0613 13:00:05.967367   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/calico-185000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d5b07c19-0f88-4d60-9a22-0ecb5d7900fa] Running
E0613 13:00:11.536946   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/bridge-185000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.013112228s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-690000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-690000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-690000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-690000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-690000 --alsologtostderr -v=3: (10.916480866s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000: exit status 7 (92.611338ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-690000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-690000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-690000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.2: (5m11.853124167s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.24s)
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xljld" [ea8dcde6-e2ab-4f9d-b976-a9c68548f246] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xljld" [ea8dcde6-e2ab-4f9d-b976-a9c68548f246] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.012936405s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xljld" [ea8dcde6-e2ab-4f9d-b976-a9c68548f246] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.035987987s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-690000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-690000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-690000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000: exit status 2 (378.395829ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000: exit status 2 (374.582615ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-690000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-690000 -n default-k8s-diff-port-690000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)
TestStartStop/group/newest-cni/serial/FirstStart (38.98s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-802000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.2
E0613 13:06:06.068544   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/custom-flannel-185000/client.crt: no such file or directory
E0613 13:06:10.163117   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/no-preload-874000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-802000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.2: (38.976324703s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.98s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-802000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)
TestStartStop/group/newest-cni/serial/Stop (5.85s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-802000 --alsologtostderr -v=3
E0613 13:06:46.473528   20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/auto-185000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-802000 --alsologtostderr -v=3: (5.847660862s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.85s)
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-802000 -n newest-cni-802000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-802000 -n newest-cni-802000: exit status 7 (93.733378ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-802000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)
TestStartStop/group/newest-cni/serial/SecondStart (28.04s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-802000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-802000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.2: (27.644854671s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-802000 -n newest-cni-802000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (28.04s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-802000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
TestStartStop/group/newest-cni/serial/Pause (2.96s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-802000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-802000 -n newest-cni-802000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-802000 -n newest-cni-802000: exit status 2 (377.500613ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-802000 -n newest-cni-802000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-802000 -n newest-cni-802000: exit status 2 (373.711275ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-802000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-802000 -n newest-cni-802000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-802000 -n newest-cni-802000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)
Test skip (18/316)
TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)
TestDownloadOnly/v1.27.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)
TestDownloadOnly/v1.27.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)
TestAddons/parallel/Registry (18.14s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 16.534119ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-dsg6c" [3d738bbf-d1b5-47c9-a7fa-78e6da5d1bc6] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010253032s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ntjnt" [e72cba8c-5b40-498c-9510-1b24d0c4661b] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.060835839s
addons_test.go:316: (dbg) Run:  kubectl --context addons-054000 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-054000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-054000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.845123675s)
addons_test.go:331: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.14s)
TestAddons/parallel/Ingress (12.25s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-054000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-054000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-054000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b380d4e4-2070-4e94-93f4-d6a4434ae0ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b380d4e4-2070-4e94-93f4-d6a4434ae0ea] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.066893723s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p addons-054000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:258: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.25s)
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
TestFunctional/parallel/ServiceCmdConnect (7.12s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-216000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-216000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-9lrmm" [d4cc7bab-e1d6-4596-9774-a5c0c12ab8fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-9lrmm" [d4cc7bab-e1d6-4596-9774-a5c0c12ab8fb] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.007312075s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestNetworkPlugins/group/cilium (6.12s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-185000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-185000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-185000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/hosts:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/resolv.conf:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-185000

>>> host: crictl pods:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: crictl containers:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> k8s: describe netcat deployment:
error: context "cilium-185000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-185000" does not exist

>>> k8s: netcat logs:
error: context "cilium-185000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-185000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-185000" does not exist

>>> k8s: coredns logs:
error: context "cilium-185000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-185000" does not exist

>>> k8s: api server logs:
error: context "cilium-185000" does not exist

>>> host: /etc/cni:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: ip a s:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: ip r s:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: iptables-save:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: iptables table nat:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-185000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-185000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-185000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-185000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-185000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-185000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-185000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-185000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-185000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-185000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-185000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: kubelet daemon config:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> k8s: kubelet logs:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-185000

>>> host: docker daemon status:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: docker daemon config:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: docker system info:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: cri-docker daemon status:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: cri-docker daemon config:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: cri-dockerd version:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: containerd daemon status:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: containerd daemon config:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: containerd config dump:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: crio daemon status:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: crio daemon config:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: /etc/crio:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

>>> host: crio config:
* Profile "cilium-185000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185000"

----------------------- debugLogs end: cilium-185000 [took: 5.648826005s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-185000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-185000
--- SKIP: TestNetworkPlugins/group/cilium (6.12s)

TestStartStop/group/disable-driver-mounts (0.38s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-899000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-899000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.38s)