Test Report: Docker_Linux 17936

37a485e4feb148de92f40b101448d251106852cf:2024-02-16:33175

Test failures (9/331)

TestIngressAddonLegacy/StartLegacyK8sCluster (518.58s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0216 16:51:33.934467   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:52:55.856413   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:54:26.472112   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.477437   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.487823   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.508193   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.548566   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.628984   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.789483   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:27.110244   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:27.751158   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:29.031770   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:31.593620   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:36.714395   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:46.955443   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:55:07.436614   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:55:12.011491   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:55:39.697165   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:55:48.397752   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:57:10.319163   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:59:26.472389   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m38.52254508s)
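Note: the repeated cert_rotation errors above reference client certificates for the addons-500129 and functional-361824 profiles, which appear to have been torn down by earlier tests in this run; they are noise from client-go's certificate watcher rather than part of this failure. A minimal cleanup sketch for a local reproduction, assuming those stale profiles are no longer needed (profile names taken from this log):

	# Delete the stale profiles whose client.crt files no longer exist:
	minikube delete -p addons-500129
	minikube delete -p functional-361824
	# Or wipe every profile and the cached .minikube state in one step:
	minikube delete --all --purge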

-- stdout --
	* [ingress-addon-legacy-988248] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node ingress-addon-legacy-988248 in cluster ingress-addon-legacy-988248
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.531525    5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	  Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.532621    5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	  Feb 16 16:59:07 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:07.526485    5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	
	

-- /stdout --
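
The kubelet problems captured in the stdout above show all three control-plane containers failing with ImageInspectError: dockerd returns the k8s.gcr.io v1.18.20 images with no Id or size set, which suggests the preloaded image metadata inside the node is damaged. A hedged way to check this by hand (profile name taken from this log; exact output depends on the Docker version inside the node):

	# Inspect one failing image inside the minikube node; a healthy image
	# prints a non-empty Id and a non-zero Size.
	minikube -p ingress-addon-legacy-988248 ssh -- \
	  docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.18.20
	# List everything the preload loaded into the node's docker daemon:
	minikube -p ingress-addon-legacy-988248 ssh -- docker images
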
** stderr ** 
	I0216 16:50:55.622203   67305 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:50:55.622685   67305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:50:55.622700   67305 out.go:304] Setting ErrFile to fd 2...
	I0216 16:50:55.622709   67305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:50:55.623142   67305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 16:50:55.624200   67305 out.go:298] Setting JSON to false
	I0216 16:50:55.625357   67305 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2002,"bootTime":1708100254,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 16:50:55.625421   67305 start.go:139] virtualization: kvm guest
	I0216 16:50:55.627566   67305 out.go:177] * [ingress-addon-legacy-988248] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 16:50:55.628951   67305 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:50:55.628983   67305 notify.go:220] Checking for updates...
	I0216 16:50:55.630195   67305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:50:55.631463   67305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 16:50:55.632975   67305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 16:50:55.634361   67305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 16:50:55.635637   67305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:50:55.637066   67305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:50:55.660386   67305 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:50:55.660498   67305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:50:55.714099   67305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-16 16:50:55.702525693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:50:55.714202   67305 docker.go:295] overlay module found
	I0216 16:50:55.715869   67305 out.go:177] * Using the docker driver based on user configuration
	I0216 16:50:55.717201   67305 start.go:299] selected driver: docker
	I0216 16:50:55.717223   67305 start.go:903] validating driver "docker" against <nil>
	I0216 16:50:55.717237   67305 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:50:55.718243   67305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:50:55.770105   67305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-16 16:50:55.760742037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:50:55.770265   67305 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:50:55.770468   67305 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 16:50:55.772019   67305 out.go:177] * Using Docker driver with root privileges
	I0216 16:50:55.773375   67305 cni.go:84] Creating CNI manager for ""
	I0216 16:50:55.773417   67305 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 16:50:55.773433   67305 start_flags.go:323] config:
	{Name:ingress-addon-legacy-988248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:50:55.774814   67305 out.go:177] * Starting control plane node ingress-addon-legacy-988248 in cluster ingress-addon-legacy-988248
	I0216 16:50:55.776051   67305 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:50:55.777418   67305 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:50:55.778656   67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:50:55.778772   67305 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:50:55.794808   67305 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 16:50:55.794845   67305 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 16:50:55.879522   67305 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0216 16:50:55.879555   67305 cache.go:56] Caching tarball of preloaded images
	I0216 16:50:55.879766   67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:50:55.881669   67305 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0216 16:50:55.882964   67305 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:50:55.985756   67305 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0216 16:51:07.502110   67305 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:51:07.502206   67305 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:51:08.354473   67305 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0216 16:51:08.354841   67305 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/config.json ...
	I0216 16:51:08.354873   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/config.json: {Name:mk98312a6968118c75080ccc2134599c6af7c4ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:08.355039   67305 cache.go:194] Successfully downloaded all kic artifacts
	I0216 16:51:08.355064   67305 start.go:365] acquiring machines lock for ingress-addon-legacy-988248: {Name:mk3ecd0f6305afd0e654759010df5c333f00ace4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 16:51:08.355106   67305 start.go:369] acquired machines lock for "ingress-addon-legacy-988248" in 31.045µs
	I0216 16:51:08.355123   67305 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-988248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 16:51:08.355203   67305 start.go:125] createHost starting for "" (driver="docker")
	I0216 16:51:08.357356   67305 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0216 16:51:08.357561   67305 start.go:159] libmachine.API.Create for "ingress-addon-legacy-988248" (driver="docker")
	I0216 16:51:08.357584   67305 client.go:168] LocalClient.Create starting
	I0216 16:51:08.357680   67305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem
	I0216 16:51:08.357715   67305 main.go:141] libmachine: Decoding PEM data...
	I0216 16:51:08.357728   67305 main.go:141] libmachine: Parsing certificate...
	I0216 16:51:08.357775   67305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem
	I0216 16:51:08.357795   67305 main.go:141] libmachine: Decoding PEM data...
	I0216 16:51:08.357803   67305 main.go:141] libmachine: Parsing certificate...
	I0216 16:51:08.358096   67305 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 16:51:08.373724   67305 cli_runner.go:211] docker network inspect ingress-addon-legacy-988248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 16:51:08.373806   67305 network_create.go:281] running [docker network inspect ingress-addon-legacy-988248] to gather additional debugging logs...
	I0216 16:51:08.373832   67305 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988248
	W0216 16:51:08.389222   67305 cli_runner.go:211] docker network inspect ingress-addon-legacy-988248 returned with exit code 1
	I0216 16:51:08.389263   67305 network_create.go:284] error running [docker network inspect ingress-addon-legacy-988248]: docker network inspect ingress-addon-legacy-988248: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-988248 not found
	I0216 16:51:08.389279   67305 network_create.go:286] output of [docker network inspect ingress-addon-legacy-988248]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-988248 not found
	
	** /stderr **
	I0216 16:51:08.389414   67305 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 16:51:08.406580   67305 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027c1a80}
	I0216 16:51:08.406622   67305 network_create.go:124] attempt to create docker network ingress-addon-legacy-988248 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0216 16:51:08.406668   67305 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 ingress-addon-legacy-988248
	I0216 16:51:08.466854   67305 network_create.go:108] docker network ingress-addon-legacy-988248 192.168.49.0/24 created
	I0216 16:51:08.466889   67305 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-988248" container
	I0216 16:51:08.466957   67305 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 16:51:08.482302   67305 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-988248 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --label created_by.minikube.sigs.k8s.io=true
	I0216 16:51:08.498526   67305 oci.go:103] Successfully created a docker volume ingress-addon-legacy-988248
	I0216 16:51:08.498675   67305 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-988248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --entrypoint /usr/bin/test -v ingress-addon-legacy-988248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 16:51:10.011341   67305 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-988248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --entrypoint /usr/bin/test -v ingress-addon-legacy-988248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.512603265s)
	I0216 16:51:10.011371   67305 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-988248
	I0216 16:51:10.011393   67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:51:10.011417   67305 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 16:51:10.011494   67305 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-988248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 16:51:14.755162   67305 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-988248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.743612668s)
	I0216 16:51:14.755198   67305 kic.go:203] duration metric: took 4.743778 seconds to extract preloaded images to volume
	W0216 16:51:14.755334   67305 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 16:51:14.755445   67305 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 16:51:14.807343   67305 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-988248 --name ingress-addon-legacy-988248 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --network ingress-addon-legacy-988248 --ip 192.168.49.2 --volume ingress-addon-legacy-988248:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 16:51:15.092814   67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Running}}
	I0216 16:51:15.111016   67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
	I0216 16:51:15.128407   67305 cli_runner.go:164] Run: docker exec ingress-addon-legacy-988248 stat /var/lib/dpkg/alternatives/iptables
	I0216 16:51:15.170453   67305 oci.go:144] the created container "ingress-addon-legacy-988248" has a running status.
	I0216 16:51:15.170492   67305 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa...
	I0216 16:51:15.244092   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0216 16:51:15.244138   67305 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 16:51:15.263254   67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
	I0216 16:51:15.279182   67305 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 16:51:15.279202   67305 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-988248 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 16:51:15.317437   67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
	I0216 16:51:15.337076   67305 machine.go:88] provisioning docker machine ...
	I0216 16:51:15.337109   67305 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-988248"
	I0216 16:51:15.337165   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:15.352984   67305 main.go:141] libmachine: Using SSH client type: native
	I0216 16:51:15.353344   67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:51:15.353361   67305 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-988248 && echo "ingress-addon-legacy-988248" | sudo tee /etc/hostname
	I0216 16:51:15.353973   67305 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37462->127.0.0.1:32792: read: connection reset by peer
	I0216 16:51:18.498423   67305 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-988248
	
	I0216 16:51:18.498493   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:18.515375   67305 main.go:141] libmachine: Using SSH client type: native
	I0216 16:51:18.515698   67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:51:18.515718   67305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-988248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-988248/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-988248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 16:51:18.644271   67305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 16:51:18.644304   67305 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 16:51:18.644349   67305 ubuntu.go:177] setting up certificates
	I0216 16:51:18.644370   67305 provision.go:83] configureAuth start
	I0216 16:51:18.644423   67305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988248
	I0216 16:51:18.660770   67305 provision.go:138] copyHostCerts
	I0216 16:51:18.660805   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 16:51:18.660831   67305 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 16:51:18.660837   67305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 16:51:18.660901   67305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 16:51:18.660972   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 16:51:18.660990   67305 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 16:51:18.661007   67305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 16:51:18.661039   67305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 16:51:18.661102   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 16:51:18.661119   67305 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 16:51:18.661123   67305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 16:51:18.661143   67305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 16:51:18.661187   67305 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-988248 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-988248]
	I0216 16:51:18.813075   67305 provision.go:172] copyRemoteCerts
	I0216 16:51:18.813130   67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 16:51:18.813161   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:18.829531   67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:51:18.924594   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0216 16:51:18.924670   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 16:51:18.947206   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0216 16:51:18.947274   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0216 16:51:18.969774   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0216 16:51:18.969836   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0216 16:51:18.992146   67305 provision.go:86] duration metric: configureAuth took 347.762118ms
	I0216 16:51:18.992190   67305 ubuntu.go:193] setting minikube options for container-runtime
	I0216 16:51:18.992374   67305 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 16:51:18.992421   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:19.009095   67305 main.go:141] libmachine: Using SSH client type: native
	I0216 16:51:19.009471   67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:51:19.009485   67305 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 16:51:19.140570   67305 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 16:51:19.140608   67305 ubuntu.go:71] root file system type: overlay
	I0216 16:51:19.140735   67305 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 16:51:19.140799   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:19.157311   67305 main.go:141] libmachine: Using SSH client type: native
	I0216 16:51:19.157645   67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:51:19.157706   67305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 16:51:19.298575   67305 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 16:51:19.298645   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:19.315887   67305 main.go:141] libmachine: Using SSH client type: native
	I0216 16:51:19.316297   67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:51:19.316317   67305 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 16:51:19.994337   67305 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 16:51:19.292587221 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 16:51:19.994371   67305 machine.go:91] provisioned docker machine in 4.657273017s
	I0216 16:51:19.994384   67305 client.go:171] LocalClient.Create took 11.636792969s
	I0216 16:51:19.994401   67305 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-988248" took 11.636839631s
	I0216 16:51:19.994408   67305 start.go:300] post-start starting for "ingress-addon-legacy-988248" (driver="docker")
	I0216 16:51:19.994417   67305 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 16:51:19.994461   67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 16:51:19.994496   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:20.011197   67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:51:20.105286   67305 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 16:51:20.108460   67305 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 16:51:20.108499   67305 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 16:51:20.108511   67305 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 16:51:20.108521   67305 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 16:51:20.108542   67305 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 16:51:20.108598   67305 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 16:51:20.108728   67305 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 16:51:20.108743   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> /etc/ssl/certs/136192.pem
	I0216 16:51:20.108854   67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 16:51:20.116938   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 16:51:20.138510   67305 start.go:303] post-start completed in 144.091536ms
	I0216 16:51:20.138864   67305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988248
	I0216 16:51:20.154681   67305 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/config.json ...
	I0216 16:51:20.155011   67305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 16:51:20.155068   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:20.170852   67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:51:20.260774   67305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 16:51:20.264902   67305 start.go:128] duration metric: createHost completed in 11.909688255s
	I0216 16:51:20.264925   67305 start.go:83] releasing machines lock for "ingress-addon-legacy-988248", held for 11.909809331s
	I0216 16:51:20.264977   67305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988248
	I0216 16:51:20.280983   67305 ssh_runner.go:195] Run: cat /version.json
	I0216 16:51:20.281034   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:20.281082   67305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 16:51:20.281149   67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:51:20.298368   67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:51:20.298754   67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:51:20.387536   67305 ssh_runner.go:195] Run: systemctl --version
	I0216 16:51:20.478143   67305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 16:51:20.482446   67305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 16:51:20.504315   67305 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 16:51:20.504384   67305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 16:51:20.519602   67305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 16:51:20.534747   67305 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
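The three find/sed passes above normalize whatever CNI files ship in the base image: the loopback conf gains an explicit name and cniVersion 1.0.0, and the bridge/podman confs (here 100-crio-bridge.conf and 87-podman-bridge.conflist) get their subnet pinned to 10.244.0.0/16 with gateway 10.244.0.1. A minimal sketch of what the patched loopback conf ends up containing (illustrative layout; the file body is not captured in this log):

    sudo cat /etc/cni/net.d/*loopback.conf*
    # {
    #     "cniVersion": "1.0.0",
    #     "name": "loopback",
    #     "type": "loopback"
    # }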
	I0216 16:51:20.534785   67305 start.go:475] detecting cgroup driver to use...
	I0216 16:51:20.534821   67305 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 16:51:20.534938   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 16:51:20.549856   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0216 16:51:20.559177   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 16:51:20.568130   67305 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 16:51:20.568233   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 16:51:20.577369   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 16:51:20.586323   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 16:51:20.595267   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 16:51:20.604611   67305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 16:51:20.612748   67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 16:51:20.621445   67305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 16:51:20.629198   67305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 16:51:20.636962   67305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:51:20.709750   67305 ssh_runner.go:195] Run: sudo systemctl restart containerd
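Each sed above is a targeted key rewrite in /etc/containerd/config.toml; after the daemon-reload/restart pair, the effective values can be spot-checked with a grep (a verification sketch, not something the test runs):

    sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # sandbox_image = "registry.k8s.io/pause:3.2"
    # restrict_oom_score_adj = false
    # SystemdCgroup = false          <- matches the detected "cgroupfs" driver
    # conf_dir = "/etc/cni/net.d"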
	I0216 16:51:20.793247   67305 start.go:475] detecting cgroup driver to use...
	I0216 16:51:20.793298   67305 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 16:51:20.793349   67305 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 16:51:20.804441   67305 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 16:51:20.804503   67305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 16:51:20.815382   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 16:51:20.831255   67305 ssh_runner.go:195] Run: which cri-dockerd
	I0216 16:51:20.834415   67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 16:51:20.843176   67305 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 16:51:20.861068   67305 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 16:51:20.968587   67305 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 16:51:21.049973   67305 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 16:51:21.050100   67305 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 16:51:21.066463   67305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:51:21.136579   67305 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 16:51:21.366158   67305 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 16:51:21.387632   67305 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
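The 130-byte daemon.json scp'd at 16:51:21.050100 is what switches dockerd to the cgroupfs driver; minikube writes it from memory, so the body never appears in the log. A plausible reconstruction (assumed contents; only the size and purpose are confirmed by the log):

    sudo cat /etc/docker/daemon.json
    # {"exec-opts":["native.cgroupdriver=cgroupfs"],"log-driver":"json-file",
    #  "log-opts":{"max-size":"100m"},"storage-driver":"overlay2"}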
	I0216 16:51:21.414900   67305 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0216 16:51:21.414996   67305 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 16:51:21.433039   67305 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0216 16:51:21.436925   67305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 16:51:21.447308   67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:51:21.447379   67305 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 16:51:21.465592   67305 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0216 16:51:21.465637   67305 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0216 16:51:21.465687   67305 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 16:51:21.473676   67305 ssh_runner.go:195] Run: which lz4
	I0216 16:51:21.476774   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0216 16:51:21.476869   67305 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 16:51:21.479921   67305 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 16:51:21.479950   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0216 16:51:22.297868   67305 docker.go:649] Took 0.821034 seconds to copy over tarball
	I0216 16:51:22.297929   67305 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 16:51:24.354361   67305 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.056409131s)
	I0216 16:51:24.354388   67305 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 16:51:24.416476   67305 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 16:51:24.426121   67305 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0216 16:51:24.445221   67305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:51:24.521911   67305 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 16:51:27.132947   67305 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.610998812s)
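Steps 16:51:21.479950 through 16:51:27.132947 are the whole preload fast path: ship one lz4 tarball of the docker image store, unpack it over /var, restore repositories.json, restart dockerd. Condensed to the commands actually run on the node:

    # host -> node copy logged at 16:51:21.479950 (424164442 bytes):
    #   .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo systemctl daemon-reload && sudo systemctl restart docker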
	I0216 16:51:27.133017   67305 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 16:51:27.150880   67305 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0216 16:51:27.150902   67305 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0216 16:51:27.150910   67305 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 16:51:27.152268   67305 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:51:27.152298   67305 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0216 16:51:27.152308   67305 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0216 16:51:27.152340   67305 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0216 16:51:27.152269   67305 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:51:27.152297   67305 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:51:27.152463   67305 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:51:27.152522   67305 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:51:27.153173   67305 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:51:27.153281   67305 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:51:27.153296   67305 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0216 16:51:27.153296   67305 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:51:27.153308   67305 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:51:27.153336   67305 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:51:27.153363   67305 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0216 16:51:27.153285   67305 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0216 16:51:27.358011   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0216 16:51:27.370246   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0216 16:51:27.375864   67305 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0216 16:51:27.375917   67305 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0216 16:51:27.375957   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0216 16:51:27.388762   67305 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0216 16:51:27.388813   67305 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0216 16:51:27.388859   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0216 16:51:27.393579   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0216 16:51:27.408270   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0216 16:51:27.415310   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0216 16:51:27.435039   67305 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0216 16:51:27.435084   67305 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0216 16:51:27.435119   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0216 16:51:27.452312   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0216 16:51:27.522242   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:51:27.525091   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:51:27.527170   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:51:27.527191   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:51:27.543210   67305 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0216 16:51:27.543258   67305 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:51:27.543307   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:51:27.544948   67305 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0216 16:51:27.545001   67305 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:51:27.545041   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:51:27.549983   67305 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0216 16:51:27.550020   67305 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0216 16:51:27.550027   67305 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:51:27.550040   67305 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:51:27.550072   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:51:27.550076   67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:51:27.562758   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0216 16:51:27.564255   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0216 16:51:27.568571   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0216 16:51:27.569668   67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0216 16:51:27.997741   67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:51:28.015355   67305 cache_images.go:92] LoadImages completed in 864.431051ms
	W0216 16:51:28.015452   67305 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
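The warning stems from a registry-name mismatch rather than a missing preload: the tarball restored k8s.gcr.io/* tags (image list at 16:51:27.150880), while this build keys its cache on registry.k8s.io/*, so LoadImages removes each image and then finds no registry.k8s.io/* files under .minikube/cache/images to reload. A hypothetical by-hand workaround (not something the test does) is to retag the preloaded images inside the node before they are removed:

    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker tag "k8s.gcr.io/${img}:v1.18.20" "registry.k8s.io/${img}:v1.18.20"
    done
    docker tag k8s.gcr.io/pause:3.2     registry.k8s.io/pause:3.2
    docker tag k8s.gcr.io/etcd:3.4.3-0  registry.k8s.io/etcd:3.4.3-0
    docker tag k8s.gcr.io/coredns:1.6.7 registry.k8s.io/coredns:1.6.7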
	I0216 16:51:28.015524   67305 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 16:51:28.064031   67305 cni.go:84] Creating CNI manager for ""
	I0216 16:51:28.064055   67305 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 16:51:28.064068   67305 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 16:51:28.064084   67305 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-988248 NodeName:ingress-addon-legacy-988248 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 16:51:28.064240   67305 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-988248"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 16:51:28.064306   67305 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-988248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
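The ExecStart override above is written to the 10-kubeadm.conf drop-in scp'd a few lines below (354 bytes); once daemon-reload has run, the merged unit systemd actually uses is visible on the node with:

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf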
	I0216 16:51:28.064358   67305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0216 16:51:28.072871   67305 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 16:51:28.072960   67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 16:51:28.081499   67305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0216 16:51:28.096813   67305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0216 16:51:28.112291   67305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0216 16:51:28.128483   67305 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0216 16:51:28.131815   67305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 16:51:28.142539   67305 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248 for IP: 192.168.49.2
	I0216 16:51:28.142579   67305 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.142731   67305 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 16:51:28.142793   67305 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 16:51:28.142857   67305 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.key
	I0216 16:51:28.142874   67305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.crt with IP's: []
	I0216 16:51:28.238957   67305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.crt ...
	I0216 16:51:28.238995   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.crt: {Name:mk38e6f23e3ecbd1fa8e0f54e1c8bcc52a30609c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.239183   67305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.key ...
	I0216 16:51:28.239201   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.key: {Name:mk9a1cc2fd946429c955c6e35a175fe1c94bbc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.239308   67305 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2
	I0216 16:51:28.239327   67305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 16:51:28.372995   67305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2 ...
	I0216 16:51:28.373027   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2: {Name:mk73a2ac8fec36249407b47a716b224e9495eb84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.373217   67305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2 ...
	I0216 16:51:28.373240   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2: {Name:mk53ca1d3d89356c3c4adceada67f4311abde604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.373335   67305 certs.go:337] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt
	I0216 16:51:28.373432   67305 certs.go:341] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key
	I0216 16:51:28.373520   67305 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key
	I0216 16:51:28.373541   67305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt with IP's: []
	I0216 16:51:28.491484   67305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt ...
	I0216 16:51:28.491521   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt: {Name:mkd95586f34fea099603623625b9eb1f83dece71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.491727   67305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key ...
	I0216 16:51:28.491748   67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key: {Name:mk6fa872ea15a338d34e8a60cd2cd3081654123c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:51:28.491853   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0216 16:51:28.491880   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0216 16:51:28.491899   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0216 16:51:28.491918   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0216 16:51:28.491940   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0216 16:51:28.491963   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0216 16:51:28.491985   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0216 16:51:28.492009   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
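The apiserver cert staged above was generated at 16:51:28.239327 for IPs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1], i.e. the node IP, the first address of the 10.96.0.0/12 service CIDR, and loopback. Confirming the SANs landed in the issued cert (standard openssl inspection, paths from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt \
      | grep -A1 'Subject Alternative Name'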
	I0216 16:51:28.492085   67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 16:51:28.492142   67305 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 16:51:28.492176   67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 16:51:28.492225   67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 16:51:28.492265   67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 16:51:28.492304   67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 16:51:28.492375   67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 16:51:28.492422   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:51:28.492451   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem -> /usr/share/ca-certificates/13619.pem
	I0216 16:51:28.492473   67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> /usr/share/ca-certificates/136192.pem
	I0216 16:51:28.493059   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 16:51:28.515946   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 16:51:28.537561   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 16:51:28.559344   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 16:51:28.580562   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 16:51:28.601788   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 16:51:28.622954   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 16:51:28.644619   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 16:51:28.666619   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 16:51:28.688978   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 16:51:28.710724   67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 16:51:28.732115   67305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 16:51:28.748121   67305 ssh_runner.go:195] Run: openssl version
	I0216 16:51:28.753129   67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 16:51:28.761505   67305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:51:28.764814   67305 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:51:28.764862   67305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:51:28.771458   67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 16:51:28.779958   67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 16:51:28.788471   67305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 16:51:28.791656   67305 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 16:51:28.791716   67305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 16:51:28.798054   67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
	I0216 16:51:28.806779   67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 16:51:28.815520   67305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 16:51:28.818673   67305 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 16:51:28.818745   67305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 16:51:28.825110   67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
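The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: each is the OpenSSL subject hash of the corresponding PEM, produced by the "openssl x509 -hash -noout" calls interleaved above, which is how libssl locates CAs in /etc/ssl/certs. The general pattern:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h=b5213941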
	I0216 16:51:28.833862   67305 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 16:51:28.836992   67305 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 16:51:28.837035   67305 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-988248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:51:28.837142   67305 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 16:51:28.853402   67305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 16:51:28.861330   67305 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 16:51:28.869251   67305 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 16:51:28.869302   67305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 16:51:28.877038   67305 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 16:51:28.877087   67305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 16:51:28.919141   67305 kubeadm.go:322] W0216 16:51:28.918543    1839 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0216 16:51:29.030848   67305 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 16:51:29.079459   67305 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0216 16:51:29.079721   67305 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 16:51:29.145914   67305 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 16:51:31.548534   67305 kubeadm.go:322] W0216 16:51:31.548201    1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:51:31.549498   67305 kubeadm.go:322] W0216 16:51:31.549264    1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:55:31.553802   67305 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 16:55:31.553901   67305 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 16:55:31.556558   67305 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0216 16:55:31.556665   67305 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 16:55:31.556769   67305 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 16:55:31.556859   67305 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 16:55:31.556927   67305 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 16:55:31.556983   67305 kubeadm.go:322] OS: Linux
	I0216 16:55:31.557052   67305 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 16:55:31.557113   67305 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 16:55:31.557154   67305 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 16:55:31.557195   67305 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 16:55:31.557244   67305 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 16:55:31.557284   67305 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 16:55:31.557346   67305 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 16:55:31.557423   67305 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 16:55:31.557522   67305 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 16:55:31.557649   67305 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 16:55:31.557727   67305 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 16:55:31.557782   67305 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 16:55:31.557859   67305 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 16:55:31.559977   67305 out.go:204]   - Generating certificates and keys ...
	I0216 16:55:31.560054   67305 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 16:55:31.560108   67305 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 16:55:31.560193   67305 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 16:55:31.560244   67305 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 16:55:31.560304   67305 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 16:55:31.560345   67305 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 16:55:31.560411   67305 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 16:55:31.560569   67305 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 16:55:31.560639   67305 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 16:55:31.560756   67305 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 16:55:31.560819   67305 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 16:55:31.560889   67305 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 16:55:31.560930   67305 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 16:55:31.560988   67305 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 16:55:31.561038   67305 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 16:55:31.561084   67305 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 16:55:31.561138   67305 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 16:55:31.561183   67305 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 16:55:31.561246   67305 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 16:55:31.563286   67305 out.go:204]   - Booting up control plane ...
	I0216 16:55:31.563375   67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 16:55:31.563465   67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 16:55:31.563545   67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 16:55:31.563637   67305 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 16:55:31.563770   67305 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 16:55:31.563814   67305 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 16:55:31.563819   67305 kubeadm.go:322] 
	I0216 16:55:31.563853   67305 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0216 16:55:31.563887   67305 kubeadm.go:322] 		timed out waiting for the condition
	I0216 16:55:31.563893   67305 kubeadm.go:322] 
	I0216 16:55:31.563924   67305 kubeadm.go:322] 	This error is likely caused by:
	I0216 16:55:31.563958   67305 kubeadm.go:322] 		- The kubelet is not running
	I0216 16:55:31.564067   67305 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 16:55:31.564078   67305 kubeadm.go:322] 
	I0216 16:55:31.564203   67305 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 16:55:31.564236   67305 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0216 16:55:31.564316   67305 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0216 16:55:31.564329   67305 kubeadm.go:322] 
	I0216 16:55:31.564412   67305 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 16:55:31.564491   67305 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0216 16:55:31.564497   67305 kubeadm.go:322] 
	I0216 16:55:31.564568   67305 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0216 16:55:31.564637   67305 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0216 16:55:31.564703   67305 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0216 16:55:31.564740   67305 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0216 16:55:31.564770   67305 kubeadm.go:322] 
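Since this run uses the docker driver, the systemctl/journalctl suggestions above have to be executed inside the node container rather than on the Jenkins host. With the profile name from this run, either route gets there (a troubleshooting sketch; this output was not captured in the log):

    minikube ssh -p ingress-addon-legacy-988248 -- sudo journalctl -xeu kubelet --no-pager
    docker exec ingress-addon-legacy-988248 systemctl status kubelet
    docker exec ingress-addon-legacy-988248 docker ps -a | grep kube | grep -v pause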
	W0216 16:55:31.564922   67305 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:51:28.918543    1839 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:51:31.548201    1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:51:31.549264    1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	I0216 16:55:31.565007   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 16:55:32.315641   67305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 16:55:32.327490   67305 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 16:55:32.327549   67305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 16:55:32.336388   67305 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 16:55:32.336432   67305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 16:55:32.382152   67305 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0216 16:55:32.382246   67305 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 16:55:32.565644   67305 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 16:55:32.565730   67305 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 16:55:32.565791   67305 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 16:55:32.565837   67305 kubeadm.go:322] OS: Linux
	I0216 16:55:32.565887   67305 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 16:55:32.565974   67305 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 16:55:32.566053   67305 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 16:55:32.566122   67305 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 16:55:32.566197   67305 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 16:55:32.566249   67305 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 16:55:32.636815   67305 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 16:55:32.636907   67305 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 16:55:32.637021   67305 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 16:55:32.816068   67305 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 16:55:32.817160   67305 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 16:55:32.817203   67305 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 16:55:32.900970   67305 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 16:55:32.904834   67305 out.go:204]   - Generating certificates and keys ...
	I0216 16:55:32.904961   67305 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 16:55:32.905088   67305 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 16:55:32.905205   67305 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 16:55:32.905299   67305 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 16:55:32.905393   67305 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 16:55:32.905466   67305 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 16:55:32.905560   67305 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 16:55:32.905662   67305 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 16:55:32.905771   67305 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 16:55:32.906051   67305 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 16:55:32.906103   67305 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 16:55:32.906203   67305 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 16:55:33.127868   67305 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 16:55:33.303029   67305 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 16:55:33.359183   67305 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 16:55:33.742670   67305 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 16:55:33.743221   67305 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 16:55:33.745360   67305 out.go:204]   - Booting up control plane ...
	I0216 16:55:33.745464   67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 16:55:33.749040   67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 16:55:33.751004   67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 16:55:33.751530   67305 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 16:55:33.753499   67305 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 16:56:13.753952   67305 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 16:59:33.754911   67305 kubeadm.go:322] 
	I0216 16:59:33.755011   67305 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0216 16:59:33.755067   67305 kubeadm.go:322] 		timed out waiting for the condition
	I0216 16:59:33.755076   67305 kubeadm.go:322] 
	I0216 16:59:33.755124   67305 kubeadm.go:322] 	This error is likely caused by:
	I0216 16:59:33.755171   67305 kubeadm.go:322] 		- The kubelet is not running
	I0216 16:59:33.755313   67305 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 16:59:33.755360   67305 kubeadm.go:322] 
	I0216 16:59:33.755515   67305 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 16:59:33.755561   67305 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0216 16:59:33.755613   67305 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0216 16:59:33.755623   67305 kubeadm.go:322] 
	I0216 16:59:33.755762   67305 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 16:59:33.755880   67305 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0216 16:59:33.755891   67305 kubeadm.go:322] 
	I0216 16:59:33.755991   67305 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0216 16:59:33.756063   67305 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0216 16:59:33.756178   67305 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0216 16:59:33.756238   67305 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0216 16:59:33.756248   67305 kubeadm.go:322] 
	I0216 16:59:33.758003   67305 kubeadm.go:322] W0216 16:55:32.381558    5490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0216 16:59:33.758237   67305 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 16:59:33.758413   67305 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0216 16:59:33.758685   67305 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 16:59:33.758797   67305 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 16:59:33.758962   67305 kubeadm.go:322] W0216 16:55:33.748682    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:59:33.759123   67305 kubeadm.go:322] W0216 16:55:33.750669    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:59:33.759240   67305 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 16:59:33.759427   67305 kubeadm.go:406] StartCluster complete in 8m4.922391988s
	I0216 16:59:33.759456   67305 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 16:59:33.759527   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 16:59:33.778294   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.778320   67305 logs.go:278] No container was found matching "kube-apiserver"
	I0216 16:59:33.778380   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 16:59:33.796212   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.796241   67305 logs.go:278] No container was found matching "etcd"
	I0216 16:59:33.796292   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 16:59:33.813696   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.813722   67305 logs.go:278] No container was found matching "coredns"
	I0216 16:59:33.813769   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 16:59:33.832412   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.832437   67305 logs.go:278] No container was found matching "kube-scheduler"
	I0216 16:59:33.832481   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 16:59:33.850975   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.850997   67305 logs.go:278] No container was found matching "kube-proxy"
	I0216 16:59:33.851048   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 16:59:33.868626   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.868652   67305 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 16:59:33.868707   67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 16:59:33.886353   67305 logs.go:276] 0 containers: []
	W0216 16:59:33.886381   67305 logs.go:278] No container was found matching "kindnet"
	I0216 16:59:33.886392   67305 logs.go:123] Gathering logs for kubelet ...
	I0216 16:59:33.886403   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 16:59:33.907062   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.531525    5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0216 16:59:33.907226   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.532621    5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0216 16:59:33.913677   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:07 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:07.526485    5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0216 16:59:33.917372   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:11 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:11.525019    5718 pod_workers.go:191] Error syncing pod 6aefdd7d4cb77909c7f85262968986ab ("etcd-ingress-addon-legacy-988248_kube-system(6aefdd7d4cb77909c7f85262968986ab)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0216 16:59:33.918583   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:12 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:12.525511    5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0216 16:59:33.921422   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:15 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:15.525958    5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0216 16:59:33.925270   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:19 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:19.525531    5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0216 16:59:33.929555   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:24 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:24.526142    5718 pod_workers.go:191] Error syncing pod 6aefdd7d4cb77909c7f85262968986ab ("etcd-ingress-addon-legacy-988248_kube-system(6aefdd7d4cb77909c7f85262968986ab)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0216 16:59:33.931166   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:25 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:25.524555    5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0216 16:59:33.933725   67305 logs.go:138] Found kubelet problem: Feb 16 16:59:28 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:28.525498    5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	I0216 16:59:33.937737   67305 logs.go:123] Gathering logs for dmesg ...
	I0216 16:59:33.937763   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 16:59:33.955746   67305 logs.go:123] Gathering logs for describe nodes ...
	I0216 16:59:33.955782   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 16:59:34.013814   67305 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 16:59:34.013848   67305 logs.go:123] Gathering logs for Docker ...
	I0216 16:59:34.013859   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 16:59:34.032858   67305 logs.go:123] Gathering logs for container status ...
	I0216 16:59:34.032892   67305 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0216 16:59:34.069969   67305 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:55:32.381558    5490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:55:33.748682    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:55:33.750669    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 16:59:34.070020   67305 out.go:239] * 
	W0216 16:59:34.070082   67305 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:55:32.381558    5490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:55:33.748682    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:55:33.750669    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 16:59:34.070110   67305 out.go:239] * 
	W0216 16:59:34.071023   67305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 16:59:34.073878   67305 out.go:177] X Problems detected in kubelet:
	I0216 16:59:34.075682   67305 out.go:177]   Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.531525    5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	I0216 16:59:34.077536   67305 out.go:177]   Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.532621    5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	I0216 16:59:34.079230   67305 out.go:177]   Feb 16 16:59:07 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:07.526485    5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	I0216 16:59:34.082467   67305 out.go:177] 
	W0216 16:59:34.083898   67305 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:55:32.381558    5490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:55:33.748682    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:55:33.750669    5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 16:59:34.083955   67305 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 16:59:34.083985   67305 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 16:59:34.085850   67305 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (518.58s)
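
Triage note: both kubeadm attempts in this test (the initial run and the retry after 'kubeadm reset') fail identically: 'kubeadm init' times out at wait-control-plane because the kubelet cannot start any control-plane container, each pod failing with ImageInspectError ("Id or size of image ... is not set"). Together with the preflight warning that Docker 25.0.3 is far past the last validated version (19.03) for Kubernetes v1.18.20, this suggests the legacy dockershim in kubelet v1.18 cannot read image metadata from this Docker release, in which case the cgroup-driver suggestion in the log is unlikely to be the real fix. A hypothetical spot check on the node (not part of the recorded run) would be:

	docker image inspect k8s.gcr.io/kube-apiserver:v1.18.20 --format '{{.Id}} {{.Size}}'

If the image is present under Docker 25.0.3 but the v1.18 kubelet still reports an unset Id/size, the incompatibility is in the runtime pairing, and the likely options are pinning a validated Docker version in the CI image or retiring this legacy job. The log's own suggestion can still be tried first; reconstructed from the failing invocation above (untested), that retry would be:

	out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd
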
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (81.2s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988248 addons enable ingress --alsologtostderr -v=5
E0216 16:59:54.160346   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 17:00:12.011230   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-988248 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m20.894946776s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0216 16:59:34.207188   78351 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:59:34.207371   78351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:59:34.207382   78351 out.go:304] Setting ErrFile to fd 2...
	I0216 16:59:34.207389   78351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:59:34.207612   78351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 16:59:34.207928   78351 mustload.go:65] Loading cluster: ingress-addon-legacy-988248
	I0216 16:59:34.208355   78351 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 16:59:34.208382   78351 addons.go:597] checking whether the cluster is paused
	I0216 16:59:34.208497   78351 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 16:59:34.208515   78351 host.go:66] Checking if "ingress-addon-legacy-988248" exists ...
	I0216 16:59:34.208974   78351 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
	I0216 16:59:34.226640   78351 ssh_runner.go:195] Run: systemctl --version
	I0216 16:59:34.226702   78351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:59:34.244279   78351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:59:34.336684   78351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 16:59:34.356687   78351 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0216 16:59:34.358553   78351 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 16:59:34.358575   78351 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-988248"
	I0216 16:59:34.358585   78351 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-988248"
	I0216 16:59:34.358622   78351 host.go:66] Checking if "ingress-addon-legacy-988248" exists ...
	I0216 16:59:34.358942   78351 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
	I0216 16:59:34.378262   78351 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0216 16:59:34.379830   78351 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0216 16:59:34.381467   78351 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0216 16:59:34.383021   78351 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0216 16:59:34.383045   78351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0216 16:59:34.383113   78351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 16:59:34.399722   78351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 16:59:34.503955   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:34.568844   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:34.568879   78351 retry.go:31] will retry after 298.97422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:34.868527   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:34.925507   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:34.925536   78351 retry.go:31] will retry after 518.967141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:35.445364   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:35.507476   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:35.507517   78351 retry.go:31] will retry after 526.206248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:36.034233   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:36.088822   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:36.088857   78351 retry.go:31] will retry after 690.350851ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:36.780343   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:36.833885   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:36.833944   78351 retry.go:31] will retry after 955.668264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:37.790081   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:37.846142   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:37.846191   78351 retry.go:31] will retry after 1.875862766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:39.722251   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:39.778963   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:39.779000   78351 retry.go:31] will retry after 3.420394493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:43.202633   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:43.257026   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:43.257060   78351 retry.go:31] will retry after 3.018291164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:46.276869   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:46.334817   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:46.334852   78351 retry.go:31] will retry after 9.031986426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:55.367931   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 16:59:55.423039   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 16:59:55.423085   78351 retry.go:31] will retry after 13.243523076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:00:08.668878   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:00:08.724713   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:00:08.724750   78351 retry.go:31] will retry after 19.610676877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:00:28.336292   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:00:28.390985   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:00:28.391026   78351 retry.go:31] will retry after 26.578629855s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:00:54.970501   78351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:00:55.027073   78351 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:00:55.027129   78351 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-988248"
	I0216 17:00:55.029297   78351 out.go:177] * Verifying ingress addon...
	I0216 17:00:55.031800   78351 out.go:177] 
	W0216 17:00:55.033536   78351 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-988248" does not exist: client config: context "ingress-addon-legacy-988248" does not exist]
	W0216 17:00:55.033571   78351 out.go:239] * 
	W0216 17:00:55.035736   78351 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:00:55.037602   78351 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
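
Note: the retry.go lines above show the pattern behind the 80-second wait before exit status 10: each failed kubectl apply is retried after a roughly doubled, jittered delay until the addon-enable deadline expires. A minimal Go sketch of that backoff loop (an illustration only, not minikube's actual implementation; the failing apply step is stubbed out here):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// applyAddon stands in for the kubectl apply step that keeps failing above
// with "The connection to the server localhost:8443 was refused".
func applyAddon() error {
	return errors.New("connection to the server localhost:8443 was refused")
}

// retryWithBackoff retries applyAddon with exponentially growing, jittered
// waits (compare the 298ms, 518ms, ..., 26.5s intervals logged by retry.go)
// and gives up once the overall deadline has passed.
func retryWithBackoff(deadline time.Duration) error {
	base := 300 * time.Millisecond
	start := time.Now()
	for {
		err := applyAddon()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return err
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		base *= 2
	}
}

func main() {
	if err := retryWithBackoff(80 * time.Second); err != nil {
		fmt.Println("enable failed:", err)
	}
}

With a permanently refused connection this fails the same way the addon enable did: the loop exhausts its deadline and surfaces the last apply error.
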
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-988248
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-988248:

-- stdout --
	[
	    {
	        "Id": "85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a",
	        "Created": "2024-02-16T16:51:14.822482889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 67946,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:51:15.085373028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/hosts",
	        "LogPath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a-json.log",
	        "Name": "/ingress-addon-legacy-988248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-988248:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-988248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-988248",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-988248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-988248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-988248",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-988248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3814a2002a3e88d9bd285523b816c7e6e892c246be6ec2b20a6b90b5164c770",
	            "SandboxKey": "/var/run/docker/netns/c3814a2002a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-988248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "85d30b5bbea5",
	                        "ingress-addon-legacy-988248"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "4d41bdec929d81304b4b2e4e06804c876256b5c17cc70513b00d2a9a47bbea92",
	                    "EndpointID": "9972098f1a8bedbc6f4ca28b1c647e3230007b3e536e57426f0e8de2e8d1130e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-988248",
	                        "85d30b5bbea5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-988248 -n ingress-addon-legacy-988248
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-988248 -n ingress-addon-legacy-988248: exit status 6 (288.867631ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 17:00:55.339024   79712 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-988248" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-988248" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (81.20s)
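
Note: the post-mortem docker inspect above confirms the port mapping that cli_runner extracts with a Go template (the docker container inspect -f call logged at 16:59:34): 22/tcp on the container is published at 127.0.0.1:32792. The template is ordinary text/template syntax; a self-contained sketch against a stand-in struct (the field names mirror the inspect JSON above, everything else here is illustrative):

package main

import (
	"os"
	"text/template"
)

// portBinding and container are stand-ins shaped like the docker inspect
// output above: NetworkSettings.Ports maps "22/tcp" to a list of bindings.
type portBinding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "32792"}},
	}
	// The same pipeline minikube passes to docker: index the Ports map by
	// "22/tcp", take element 0, and print its HostPort field.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 32792
		panic(err)
	}
}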

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988248 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-988248 addons enable ingress-dns --alsologtostderr -v=5: signal: killed (222.854862ms)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS

-- /stdout --
** stderr ** 
	I0216 17:00:55.408134   79804 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:00:55.408364   79804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:00:55.408376   79804 out.go:304] Setting ErrFile to fd 2...
	I0216 17:00:55.408381   79804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:00:55.408599   79804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:00:55.408948   79804 mustload.go:65] Loading cluster: ingress-addon-legacy-988248
	I0216 17:00:55.409308   79804 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 17:00:55.409330   79804 addons.go:597] checking whether the cluster is paused
	I0216 17:00:55.409417   79804 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 17:00:55.409430   79804 host.go:66] Checking if "ingress-addon-legacy-988248" exists ...
	I0216 17:00:55.409848   79804 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
	I0216 17:00:55.427648   79804 ssh_runner.go:195] Run: systemctl --version
	I0216 17:00:55.427705   79804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
	I0216 17:00:55.444041   79804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
	I0216 17:00:55.536662   79804 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:00:55.557383   79804 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0216 17:00:55.559318   79804 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 17:00:55.559341   79804 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-988248"
	I0216 17:00:55.559348   79804 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-988248"
	I0216 17:00:55.559376   79804 host.go:66] Checking if "ingress-addon-legacy-988248" exists ...
	I0216 17:00:55.559791   79804 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-988248
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-988248:

-- stdout --
	[
	    {
	        "Id": "85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a",
	        "Created": "2024-02-16T16:51:14.822482889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 67946,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:51:15.085373028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/hosts",
	        "LogPath": "/var/lib/docker/containers/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a/85d30b5bbea59e73a04fd4807200d6517224591c3d5e4ec16015d15e413f9e8a-json.log",
	        "Name": "/ingress-addon-legacy-988248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-988248:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-988248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7ca47a902314a2c66e886c426c7c2208a594e056e44849c7e25761754fb1aa94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-988248",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-988248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-988248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-988248",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-988248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3814a2002a3e88d9bd285523b816c7e6e892c246be6ec2b20a6b90b5164c770",
	            "SandboxKey": "/var/run/docker/netns/c3814a2002a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-988248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "85d30b5bbea5",
	                        "ingress-addon-legacy-988248"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "4d41bdec929d81304b4b2e4e06804c876256b5c17cc70513b00d2a9a47bbea92",
	                    "EndpointID": "9972098f1a8bedbc6f4ca28b1c647e3230007b3e536e57426f0e8de2e8d1130e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-988248",
	                        "85d30b5bbea5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-988248 -n ingress-addon-legacy-988248
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-988248 -n ingress-addon-legacy-988248: exit status 6 (277.144211ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 17:00:55.857134   79862 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-988248" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-988248" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)
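
Note: both post-mortems end with the same status.go:415 error: the profile "ingress-addon-legacy-988248" has no context in the integration kubeconfig, so status cannot extract an endpoint IP and exits 6. A sketch of that lookup using client-go's kubeconfig loader (assuming k8s.io/client-go is on the module path; the profile name is taken from the logs above):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig file the test run points KUBECONFIG at.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const profile = "ingress-addon-legacy-988248"
	if _, ok := cfg.Contexts[profile]; !ok {
		// Matches the failure mode above: no context was ever written for
		// the profile, so no endpoint can be extracted from it.
		fmt.Printf("context %q does not appear in the kubeconfig\n", profile)
		os.Exit(6)
	}
	fmt.Println("context found:", profile)
}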

TestKubernetesUpgrade (824.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109 (8m36.029173691s)

-- stdout --
	* [kubernetes-upgrade-001550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node kubernetes-upgrade-001550 in cluster kubernetes-upgrade-001550
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 17:28:32 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:32.398476    5698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-001550_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:28:32 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:32.399574    5698 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-kubernetes-upgrade-001550_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:28:38 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:38.386766    5698 pod_workers.go:191] Error syncing pod a92b4fa752bf614c8faca04c9c143a81 ("etcd-kubernetes-upgrade-001550_kube-system(a92b4fa752bf614c8faca04c9c143a81)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	
	

-- /stdout --
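
Note: the kubelet problems quoted in the stdout above all reduce to ImageInspectError: the Docker daemon inside the node has no usable record of the k8s.gcr.io v1.16.0 control-plane images. A small diagnostic sketch (hypothetical, not part of the test suite) that shells out to docker image inspect for each image the kubelet named:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact images the kubelet failed to inspect in the log above.
	images := []string{
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
	}
	for _, img := range images {
		// --format {{.Id}} prints only the image ID; a non-zero exit
		// means the daemon has no usable record of the image.
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Id}}", img).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: missing or unreadable: %v\n", img, err)
			continue
		}
		fmt.Printf("%s: %s", img, out)
	}
}

Run with docker pointed at the node's daemon (e.g. via minikube -p kubernetes-upgrade-001550 ssh), an error for each image would match the "Id or size of image ... is not set" failures.
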
** stderr ** 
	I0216 17:20:18.434054  234987 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:20:18.434212  234987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:20:18.434223  234987 out.go:304] Setting ErrFile to fd 2...
	I0216 17:20:18.434231  234987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:20:18.434551  234987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:20:18.435347  234987 out.go:298] Setting JSON to false
	I0216 17:20:18.436984  234987 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3765,"bootTime":1708100254,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 17:20:18.437130  234987 start.go:139] virtualization: kvm guest
	I0216 17:20:18.439820  234987 out.go:177] * [kubernetes-upgrade-001550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 17:20:18.441971  234987 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:20:18.441973  234987 notify.go:220] Checking for updates...
	I0216 17:20:18.444309  234987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:20:18.446037  234987 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:20:18.447664  234987 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 17:20:18.449350  234987 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 17:20:18.451051  234987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:20:18.453229  234987 config.go:182] Loaded profile config "cert-expiration-075018": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:20:18.453394  234987 config.go:182] Loaded profile config "missing-upgrade-908834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0216 17:20:18.453511  234987 config.go:182] Loaded profile config "running-upgrade-353292": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0216 17:20:18.453611  234987 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:20:18.479697  234987 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:20:18.479840  234987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:20:18.543408  234987 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-16 17:20:18.530405193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:20:18.543548  234987 docker.go:295] overlay module found
	I0216 17:20:18.546727  234987 out.go:177] * Using the docker driver based on user configuration
	I0216 17:20:18.548339  234987 start.go:299] selected driver: docker
	I0216 17:20:18.548363  234987 start.go:903] validating driver "docker" against <nil>
	I0216 17:20:18.548378  234987 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:20:18.549295  234987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:20:18.618418  234987 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-16 17:20:18.601175406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
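
The two `docker system info --format "{{json .}}"` snapshots above are how the driver pre-flight inspects daemon state before committing to the docker driver. Below is a minimal sketch of consuming that output in Go; the trimmed-down DockerInfo struct is a hypothetical subset of the fields visible in the log, not minikube's real info.go type.

// Sketch: decode `docker system info --format "{{json .}}"` into a
// small struct. Only a few of the fields from the log are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type DockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	Driver        string `json:"Driver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info DockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, driver=%s, cpus=%d\n", info.ServerVersion, info.Driver, info.NCPU)
}
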
	I0216 17:20:18.618655  234987 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 17:20:18.619013  234987 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 17:20:18.623138  234987 out.go:177] * Using Docker driver with root privileges
	I0216 17:20:18.626200  234987 cni.go:84] Creating CNI manager for ""
	I0216 17:20:18.626282  234987 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:20:18.626302  234987 start_flags.go:323] config:
	{Name:kubernetes-upgrade-001550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:20:18.629043  234987 out.go:177] * Starting control plane node kubernetes-upgrade-001550 in cluster kubernetes-upgrade-001550
	I0216 17:20:18.631797  234987 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:20:18.634313  234987 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:20:18.636434  234987 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:20:18.636483  234987 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:20:18.636505  234987 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 17:20:18.636543  234987 cache.go:56] Caching tarball of preloaded images
	I0216 17:20:18.636672  234987 preload.go:174] Found /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 17:20:18.636689  234987 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:20:18.636834  234987 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/config.json ...
	I0216 17:20:18.636861  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/config.json: {Name:mkd08afcf3a60f58760584b975139297d54e2cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
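
The config save above is guarded by a named write lock (Delay:500ms, Timeout:1m0s) so concurrent minikube processes cannot tear the profile's config.json. Below is a sketch of that pattern under stated assumptions: the lock-file approach and helper name are illustrative, not minikube's actual pkg/util/lock implementation, and a real version would retry with the Delay/Timeout shown in the log.

// Sketch: lock-guarded, atomic config save. Hypothetical helper; the
// temp-file-plus-rename step keeps readers from ever seeing a torn file.
package main

import (
	"encoding/json"
	"os"
)

func saveConfig(path string, cfg any) error {
	lock, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_EXCL, 0o600)
	if err != nil {
		return err // another writer holds the lock; a real impl would retry
	}
	defer os.Remove(path + ".lock")
	defer lock.Close()

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = saveConfig("config.json", map[string]string{"Name": "kubernetes-upgrade-001550"})
}
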
	I0216 17:20:18.654302  234987 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:20:18.654328  234987 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:20:18.654347  234987 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:20:18.654376  234987 start.go:365] acquiring machines lock for kubernetes-upgrade-001550: {Name:mkeeae0b378399243e2da0ed1a5a81f6b7830f0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:20:18.654474  234987 start.go:369] acquired machines lock for "kubernetes-upgrade-001550" in 82.881µs
	I0216 17:20:18.654497  234987 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-001550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:20:18.654603  234987 start.go:125] createHost starting for "" (driver="docker")
	I0216 17:20:18.656765  234987 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0216 17:20:18.657064  234987 start.go:159] libmachine.API.Create for "kubernetes-upgrade-001550" (driver="docker")
	I0216 17:20:18.657095  234987 client.go:168] LocalClient.Create starting
	I0216 17:20:18.657180  234987 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem
	I0216 17:20:18.657226  234987 main.go:141] libmachine: Decoding PEM data...
	I0216 17:20:18.657246  234987 main.go:141] libmachine: Parsing certificate...
	I0216 17:20:18.657343  234987 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem
	I0216 17:20:18.657373  234987 main.go:141] libmachine: Decoding PEM data...
	I0216 17:20:18.657388  234987 main.go:141] libmachine: Parsing certificate...
	I0216 17:20:18.657758  234987 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-001550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 17:20:18.676410  234987 cli_runner.go:211] docker network inspect kubernetes-upgrade-001550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 17:20:18.676513  234987 network_create.go:281] running [docker network inspect kubernetes-upgrade-001550] to gather additional debugging logs...
	I0216 17:20:18.676533  234987 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-001550
	W0216 17:20:18.695461  234987 cli_runner.go:211] docker network inspect kubernetes-upgrade-001550 returned with exit code 1
	I0216 17:20:18.695500  234987 network_create.go:284] error running [docker network inspect kubernetes-upgrade-001550]: docker network inspect kubernetes-upgrade-001550: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-001550 not found
	I0216 17:20:18.695518  234987 network_create.go:286] output of [docker network inspect kubernetes-upgrade-001550]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-001550 not found
	
	** /stderr **
	I0216 17:20:18.695618  234987 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:20:18.714370  234987 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c4eff5c28743 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:04:d1:63:57} reservation:<nil>}
	I0216 17:20:18.714905  234987 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-da77939dda2e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:54:6a:64:c8} reservation:<nil>}
	I0216 17:20:18.715448  234987 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000153d0}
	I0216 17:20:18.715470  234987 network_create.go:124] attempt to create docker network kubernetes-upgrade-001550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0216 17:20:18.715527  234987 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-001550 kubernetes-upgrade-001550
	I0216 17:20:18.778175  234987 network_create.go:108] docker network kubernetes-upgrade-001550 192.168.67.0/24 created
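
The subnet probe above walks candidate private /24s: 192.168.49.0/24 and 192.168.58.0/24 are already bound to bridges left over from earlier clusters, so 192.168.67.0/24 wins. A sketch of that scan follows; the third-octet step of 9 is inferred from the sequence in the log (49, 58, 67) and should be treated as an assumption.

// Sketch: find a free 192.168.x.0/24 by checking each candidate against
// the addresses already bound to host interfaces.
package main

import (
	"fmt"
	"net"
)

func taken(subnet *net.IPNet) bool {
	ifaces, err := net.Interfaces()
	if err != nil {
		return true // be conservative on error
	}
	for _, ifc := range ifaces {
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
				return true
			}
		}
	}
	return false
}

func main() {
	for octet := 49; octet < 256; octet += 9 {
		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		if !taken(subnet) {
			fmt.Println("using free private subnet", subnet)
			return
		}
	}
}
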
	I0216 17:20:18.778213  234987 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-001550" container
	I0216 17:20:18.778288  234987 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 17:20:18.797193  234987 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-001550 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --label created_by.minikube.sigs.k8s.io=true
	I0216 17:20:18.818806  234987 oci.go:103] Successfully created a docker volume kubernetes-upgrade-001550
	I0216 17:20:18.818897  234987 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-001550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --entrypoint /usr/bin/test -v kubernetes-upgrade-001550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 17:20:21.911355  234987 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-001550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --entrypoint /usr/bin/test -v kubernetes-upgrade-001550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (3.092404897s)
	I0216 17:20:21.911392  234987 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-001550
	I0216 17:20:21.911416  234987 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:20:21.911447  234987 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 17:20:21.911520  234987 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-001550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 17:20:29.083312  234987 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-001550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (7.171740024s)
	I0216 17:20:29.083351  234987 kic.go:203] duration metric: took 7.171901 seconds to extract preloaded images to volume
	W0216 17:20:29.083515  234987 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 17:20:29.083630  234987 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 17:20:29.151731  234987 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-001550 --name kubernetes-upgrade-001550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --network kubernetes-upgrade-001550 --ip 192.168.67.2 --volume kubernetes-upgrade-001550:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 17:20:30.182192  234987 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-001550 --name kubernetes-upgrade-001550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-001550 --network kubernetes-upgrade-001550 --ip 192.168.67.2 --volume kubernetes-upgrade-001550:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf: (1.030388216s)
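
Each `--publish=127.0.0.1::` in the run command above asks Docker for an ephemeral loopback host port; the provisioning steps below recover those mappings with inspect templates. A sketch of recovering the SSH port the same way (container name taken from this run):

// Sketch: resolve the ephemeral host port mapped to the container's
// 22/tcp, using the same Go template the inspect calls in the log use.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"kubernetes-upgrade-001550").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 32972 in this run
}
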
	I0216 17:20:30.182288  234987 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Running}}
	I0216 17:20:30.202656  234987 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:20:30.223425  234987 cli_runner.go:164] Run: docker exec kubernetes-upgrade-001550 stat /var/lib/dpkg/alternatives/iptables
	I0216 17:20:30.279437  234987 oci.go:144] the created container "kubernetes-upgrade-001550" has a running status.
	I0216 17:20:30.279480  234987 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa...
	I0216 17:20:30.452251  234987 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 17:20:30.478340  234987 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:20:30.503684  234987 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 17:20:30.503711  234987 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-001550 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 17:20:30.563933  234987 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:20:30.588122  234987 machine.go:88] provisioning docker machine ...
	I0216 17:20:30.588195  234987 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-001550"
	I0216 17:20:30.588267  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:30.632723  234987 main.go:141] libmachine: Using SSH client type: native
	I0216 17:20:30.633238  234987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0216 17:20:30.633260  234987 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-001550 && echo "kubernetes-upgrade-001550" | sudo tee /etc/hostname
	I0216 17:20:30.634173  234987 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39468->127.0.0.1:32972: read: connection reset by peer
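
The handshake failure here is expected: the container started roughly a second earlier and sshd is not yet accepting connections, so libmachine keeps redialing until the hostname command succeeds at 17:20:33. A sketch of such a wait loop follows; the backoff values are illustrative, not minikube's.

// Sketch: dial until the SSH port accepts connections, then hand off
// to the real SSH client. Bridges the gap between the reset at
// 17:20:30 and the success at 17:20:33.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is up; the SSH handshake can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not ready after %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:32972", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
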
	I0216 17:20:33.779517  234987 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-001550
	
	I0216 17:20:33.779598  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:33.797649  234987 main.go:141] libmachine: Using SSH client type: native
	I0216 17:20:33.798041  234987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0216 17:20:33.798064  234987 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-001550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-001550/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-001550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:20:33.928435  234987 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:20:33.928467  234987 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 17:20:33.928495  234987 ubuntu.go:177] setting up certificates
	I0216 17:20:33.928507  234987 provision.go:83] configureAuth start
	I0216 17:20:33.928570  234987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-001550
	I0216 17:20:33.946583  234987 provision.go:138] copyHostCerts
	I0216 17:20:33.946652  234987 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 17:20:33.946663  234987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 17:20:33.946729  234987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 17:20:33.946815  234987 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 17:20:33.946823  234987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 17:20:33.946847  234987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 17:20:33.946899  234987 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 17:20:33.946906  234987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 17:20:33.946926  234987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 17:20:33.946977  234987 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-001550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-001550]
	I0216 17:20:34.187073  234987 provision.go:172] copyRemoteCerts
	I0216 17:20:34.187134  234987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:20:34.187166  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:34.205446  234987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:20:34.301542  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 17:20:34.325299  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0216 17:20:34.349284  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0216 17:20:34.372767  234987 provision.go:86] duration metric: configureAuth took 444.243751ms
	I0216 17:20:34.372800  234987 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:20:34.372980  234987 config.go:182] Loaded profile config "kubernetes-upgrade-001550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:20:34.373028  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:34.390197  234987 main.go:141] libmachine: Using SSH client type: native
	I0216 17:20:34.390558  234987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0216 17:20:34.390578  234987 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:20:34.528807  234987 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:20:34.528837  234987 ubuntu.go:71] root file system type: overlay
	I0216 17:20:34.528997  234987 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:20:34.529058  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:34.546358  234987 main.go:141] libmachine: Using SSH client type: native
	I0216 17:20:34.546677  234987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0216 17:20:34.546761  234987 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:20:34.691754  234987 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:20:34.691857  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:34.709439  234987 main.go:141] libmachine: Using SSH client type: native
	I0216 17:20:34.709778  234987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0216 17:20:34.709797  234987 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:20:35.434759  234987 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:20:34.686164771 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 17:20:35.434793  234987 machine.go:91] provisioned docker machine in 4.846644631s
	I0216 17:20:35.434807  234987 client.go:171] LocalClient.Create took 16.777700089s
	I0216 17:20:35.434823  234987 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-001550" took 16.777760737s
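
The unit update at 17:20:34-35 uses a compare-then-swap idiom: render docker.service.new, `diff -u` it against the live unit (the diff output doubles as an audit trail in this log), and only when they differ move the new file into place and daemon-reload/enable/restart. A sketch of the same idiom in Go, under the assumption that skipping the restart is the whole point:

// Sketch of the `diff -u ... || { mv ...; systemctl ... }` pattern:
// restart the daemon only when the rendered unit actually changed.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func updateUnit(current, rendered string) error {
	old, _ := os.ReadFile(current) // a missing unit reads as empty
	next, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(old, next) {
		return os.Remove(rendered) // unchanged: skip the restart entirely
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = updateUnit("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new")
}
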
	I0216 17:20:35.434831  234987 start.go:300] post-start starting for "kubernetes-upgrade-001550" (driver="docker")
	I0216 17:20:35.434840  234987 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:20:35.434898  234987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:20:35.434956  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:35.453894  234987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:20:35.554207  234987 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:20:35.557939  234987 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:20:35.557974  234987 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:20:35.557988  234987 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:20:35.557997  234987 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:20:35.558011  234987 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 17:20:35.558082  234987 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 17:20:35.558184  234987 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 17:20:35.558334  234987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:20:35.570723  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:20:35.602383  234987 start.go:303] post-start completed in 167.528426ms
	I0216 17:20:35.602925  234987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-001550
	I0216 17:20:35.629441  234987 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/config.json ...
	I0216 17:20:35.629927  234987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:20:35.629981  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:35.655940  234987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:20:35.748958  234987 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:20:35.753855  234987 start.go:128] duration metric: createHost completed in 17.099235003s
	I0216 17:20:35.753888  234987 start.go:83] releasing machines lock for "kubernetes-upgrade-001550", held for 17.099401543s
	I0216 17:20:35.753961  234987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-001550
	I0216 17:20:35.772247  234987 ssh_runner.go:195] Run: cat /version.json
	I0216 17:20:35.772303  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:35.772322  234987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:20:35.772379  234987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:20:35.792344  234987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:20:35.793207  234987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:20:35.884429  234987 ssh_runner.go:195] Run: systemctl --version
	I0216 17:20:35.979478  234987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 17:20:35.984041  234987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 17:20:36.007682  234987 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 17:20:36.007774  234987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:20:36.024447  234987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:20:36.040852  234987 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
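
The find/sed pipelines just above patch any pre-existing CNI configs in /etc/cni/net.d so their subnets match the pod CIDR 10.244.0.0/16 and drop IPv6 dst/subnet entries. A small sketch of the subnet rewrite; the regex and CIDR mirror the logged sed expressions.

// Sketch: force every "subnet" value in a bridge CNI config onto the
// pod CIDR, as the sed command in the log does.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}`
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
}
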
	I0216 17:20:36.040890  234987 start.go:475] detecting cgroup driver to use...
	I0216 17:20:36.040920  234987 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:20:36.041062  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:20:36.057532  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:20:36.067088  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:20:36.076655  234987 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:20:36.076720  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:20:36.086178  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:20:36.096030  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:20:36.105756  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:20:36.115543  234987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:20:36.124728  234987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:20:36.134470  234987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:20:36.142749  234987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:20:36.151877  234987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:20:36.230681  234987 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 17:20:36.322020  234987 start.go:475] detecting cgroup driver to use...
	I0216 17:20:36.322092  234987 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:20:36.322151  234987 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:20:36.334518  234987 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:20:36.334633  234987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:20:36.348149  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:20:36.365520  234987 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:20:36.369027  234987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:20:36.379191  234987 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:20:36.407471  234987 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:20:36.500740  234987 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:20:36.584345  234987 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:20:36.584492  234987 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
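
Here docker.go writes a small /etc/docker/daemon.json to pin the daemon's cgroup driver to the cgroupfs driver detected on the host. The log shows only the byte count (130 bytes), so the payload below is an assumption built from the standard exec-opts form, not the file minikube actually wrote.

// Sketch: render a daemon.json that pins the docker cgroup driver.
// Field names follow dockerd's documented daemon.json schema; the
// exact contents in this run are not visible in the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
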
	I0216 17:20:36.614360  234987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:20:36.694403  234987 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:20:36.941631  234987 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:20:36.970021  234987 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:20:36.998610  234987 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:20:36.998728  234987 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-001550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:20:37.016751  234987 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0216 17:20:37.020544  234987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:20:37.031963  234987 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:20:37.032020  234987 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:20:37.051186  234987 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:20:37.051204  234987 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:20:37.051266  234987 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:20:37.059774  234987 ssh_runner.go:195] Run: which lz4
	I0216 17:20:37.063106  234987 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 17:20:37.066287  234987 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:20:37.066338  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 17:20:37.904903  234987 docker.go:649] Took 0.841832 seconds to copy over tarball
	I0216 17:20:37.904964  234987 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:20:40.094391  234987 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.189399677s)
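
The `--xattrs --xattrs-include security.capability` flags on the extraction above matter: they preserve the security.capability extended attribute, which is how Linux stores file capabilities, so capabilities set on the preloaded binaries survive the copy. A sketch of verifying the xattr after extraction (Linux-only; the path is an example, not one from this run):

// Sketch: read back the security.capability xattr to confirm it
// survived the tar extraction.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	buf := make([]byte, 64)
	n, err := syscall.Getxattr("/var/lib/minikube/binaries/example", "security.capability", buf)
	if err != nil {
		fmt.Println("no capability xattr:", err)
		return
	}
	fmt.Printf("capability xattr: %x\n", buf[:n])
}
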
	I0216 17:20:40.094420  234987 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 17:20:40.166410  234987 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:20:40.176310  234987 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:20:40.195565  234987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:20:40.272448  234987 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:20:43.132851  234987 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.860369055s)
	I0216 17:20:43.132924  234987 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:20:43.154750  234987 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:20:43.154781  234987 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:20:43.154792  234987 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:20:43.156297  234987 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:20:43.156375  234987 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:20:43.156380  234987 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:20:43.156297  234987 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:20:43.156394  234987 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:20:43.156400  234987 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:20:43.156297  234987 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:20:43.156384  234987 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:20:43.157331  234987 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:20:43.157443  234987 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:20:43.157459  234987 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:20:43.157443  234987 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:20:43.157491  234987 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:20:43.157452  234987 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:20:43.157488  234987 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:20:43.157346  234987 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:20:43.369944  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 17:20:43.391079  234987 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 17:20:43.391131  234987 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:20:43.391176  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:20:43.408905  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:20:43.411798  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 17:20:43.429937  234987 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 17:20:43.429976  234987 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:20:43.430014  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:20:43.437638  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:20:43.453126  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:20:43.460668  234987 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 17:20:43.460721  234987 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:20:43.460768  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:20:43.480536  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 17:20:43.486576  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 17:20:43.489093  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:20:43.507722  234987 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 17:20:43.507776  234987 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:20:43.507822  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:20:43.512739  234987 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 17:20:43.512800  234987 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:20:43.512841  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:20:43.515638  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:20:43.533849  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:20:43.534897  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:20:43.536793  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:20:43.540632  234987 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 17:20:43.540682  234987 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:20:43.540734  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:20:43.561564  234987 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 17:20:43.561580  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:20:43.561620  234987 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:20:43.561658  234987 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:20:43.581104  234987 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:20:43.941470  234987 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:20:43.963464  234987 cache_images.go:92] LoadImages completed in 808.652192ms
	W0216 17:20:43.963550  234987 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
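Note: the failed cache load above is non-fatal; minikube falls through to letting kubeadm pull the images instead. If the on-disk cache were wanted, it could be re-seeded from the host before the next run (a sketch; the image name is taken from the stat error above):

	minikube cache add registry.k8s.io/pause:3.1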
	I0216 17:20:43.963606  234987 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:20:44.023845  234987 cni.go:84] Creating CNI manager for ""
	I0216 17:20:44.023878  234987 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:20:44.023900  234987 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:20:44.023923  234987 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-001550 NodeName:kubernetes-upgrade-001550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:20:44.024083  234987 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-001550"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-001550
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
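The rendered kubeadm YAML above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init (see the cp further down). A sketch for inspecting what actually landed there, assuming this run's profile name:

	minikube ssh -p kubernetes-upgrade-001550 -- sudo cat /var/tmp/minikube/kubeadm.yaml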
	
	I0216 17:20:44.024197  234987 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-001550 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
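The kubelet ExecStart override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). Confirming that this drop-in is the one systemd actually uses takes only standard tooling (a sketch, run inside the node):

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the effective command line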
	I0216 17:20:44.024280  234987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:20:44.034062  234987 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:20:44.034119  234987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:20:44.043424  234987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0216 17:20:44.061560  234987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:20:44.081029  234987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0216 17:20:44.100141  234987 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:20:44.104722  234987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
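The one-liner above is the usual idempotent hosts-file update: strip any stale control-plane.minikube.internal entry, append the current mapping, and copy the temp file back under sudo. The same pattern generalized (host and ip stand in for this run's values):

	host=control-plane.minikube.internal ip=192.168.67.2
	{ grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts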
	I0216 17:20:44.116970  234987 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550 for IP: 192.168.67.2
	I0216 17:20:44.117004  234987 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.117140  234987 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 17:20:44.117197  234987 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 17:20:44.117261  234987 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key
	I0216 17:20:44.117277  234987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.crt with IP's: []
	I0216 17:20:44.365296  234987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.crt ...
	I0216 17:20:44.365330  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.crt: {Name:mk2a1397e1bc56896600f9fbc98bca7165718354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.365535  234987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key ...
	I0216 17:20:44.365556  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key: {Name:mk079f85ab2da7b04fa2a135e0a8a22795ebd13d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.365665  234987 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key.c7fa3a9e
	I0216 17:20:44.365685  234987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 17:20:44.436122  234987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt.c7fa3a9e ...
	I0216 17:20:44.436165  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt.c7fa3a9e: {Name:mk8a7aad8fd331761f6f041a7ffd1aa8e65252c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.436333  234987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key.c7fa3a9e ...
	I0216 17:20:44.436347  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key.c7fa3a9e: {Name:mk9733bd5d1b38e71a93a9cde01ee6098a2c08c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.436431  234987 certs.go:337] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt
	I0216 17:20:44.436498  234987 certs.go:341] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key
	I0216 17:20:44.436566  234987 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.key
	I0216 17:20:44.436589  234987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.crt with IP's: []
	I0216 17:20:44.650368  234987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.crt ...
	I0216 17:20:44.650408  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.crt: {Name:mkd91592f065e3dbe14d927adce0d97cedab82a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.650624  234987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.key ...
	I0216 17:20:44.650642  234987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.key: {Name:mke175c747c4554f841f0cd0fcff63aa898a8b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:20:44.650870  234987 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 17:20:44.650924  234987 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 17:20:44.650944  234987 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 17:20:44.650989  234987 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 17:20:44.651034  234987 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:20:44.651061  234987 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 17:20:44.651118  234987 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:20:44.651897  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:20:44.677254  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 17:20:44.704592  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:20:44.731630  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 17:20:44.757201  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:20:44.782446  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 17:20:44.808260  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:20:44.835339  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 17:20:44.861412  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 17:20:44.887068  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:20:44.912032  234987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 17:20:44.937109  234987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:20:44.955176  234987 ssh_runner.go:195] Run: openssl version
	I0216 17:20:44.962201  234987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 17:20:44.974218  234987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 17:20:44.978772  234987 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 17:20:44.978851  234987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 17:20:44.985917  234987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
	I0216 17:20:44.996037  234987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 17:20:45.005946  234987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 17:20:45.009620  234987 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 17:20:45.009671  234987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 17:20:45.016352  234987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 17:20:45.027283  234987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:20:45.036894  234987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:20:45.041720  234987 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:20:45.041793  234987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:20:45.050967  234987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
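The 51391683.0, 3ec20f2e.0, and b5213941.0 link names above are OpenSSL subject-hash names: TLS libraries locate a CA in /etc/ssl/certs by hashing its subject and opening <hash>.0. The same links can be rebuilt by hand (a sketch, inside the node, for the minikubeCA case):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"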
	I0216 17:20:45.062809  234987 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:20:45.066351  234987 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
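Exit status 2 from the ls above is how minikube cheaply distinguishes a first start from a restart: /var/lib/minikube/certs/etcd only exists once kubeadm has run before. The equivalent plain-shell check:

	[ -d /var/lib/minikube/certs/etcd ] && echo restart || echo "likely first start"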
	I0216 17:20:45.066406  234987 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-001550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:20:45.066531  234987 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:20:45.085322  234987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:20:45.094010  234987 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:20:45.102735  234987 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:20:45.102802  234987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:20:45.111034  234987 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:20:45.111075  234987 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:20:45.159614  234987 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:20:45.159664  234987 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:20:45.337450  234987 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:20:45.337544  234987 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:20:45.337608  234987 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:20:45.337657  234987 kubeadm.go:322] OS: Linux
	I0216 17:20:45.337731  234987 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:20:45.337803  234987 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:20:45.337902  234987 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:20:45.337973  234987 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:20:45.338043  234987 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:20:45.338122  234987 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:20:45.412861  234987 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:20:45.413024  234987 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:20:45.413157  234987 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:20:45.605407  234987 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:20:45.608643  234987 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:20:45.619009  234987 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:20:45.695896  234987 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:20:45.699052  234987 out.go:204]   - Generating certificates and keys ...
	I0216 17:20:45.699166  234987 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:20:45.699239  234987 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:20:45.878523  234987 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 17:20:46.114445  234987 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 17:20:46.228248  234987 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 17:20:46.302707  234987 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 17:20:46.445801  234987 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 17:20:46.445996  234987 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-001550 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 17:20:46.714928  234987 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 17:20:46.715153  234987 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-001550 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 17:20:47.103120  234987 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 17:20:47.266211  234987 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 17:20:47.589529  234987 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 17:20:47.589757  234987 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:20:47.846858  234987 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:20:48.054785  234987 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:20:48.223645  234987 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:20:48.466488  234987 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:20:48.467612  234987 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:20:48.469686  234987 out.go:204]   - Booting up control plane ...
	I0216 17:20:48.469833  234987 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:20:48.495735  234987 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:20:48.496988  234987 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:20:48.498363  234987 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:20:48.502751  234987 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:21:28.503010  234987 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:24:48.504073  234987 kubeadm.go:322] 
	I0216 17:24:48.504177  234987 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:24:48.504239  234987 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:24:48.504253  234987 kubeadm.go:322] 
	I0216 17:24:48.504293  234987 kubeadm.go:322] This error is likely caused by:
	I0216 17:24:48.504347  234987 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:24:48.504476  234987 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:24:48.504495  234987 kubeadm.go:322] 
	I0216 17:24:48.504726  234987 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:24:48.504822  234987 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:24:48.504884  234987 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:24:48.504894  234987 kubeadm.go:322] 
	I0216 17:24:48.505021  234987 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:24:48.505154  234987 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:24:48.505263  234987 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:24:48.505338  234987 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:24:48.505444  234987 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:24:48.505496  234987 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:24:48.508565  234987 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:24:48.508740  234987 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:24:48.508990  234987 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:24:48.509113  234987 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:24:48.509212  234987 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:24:48.509295  234987 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
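At this point the first init attempt has exhausted its full 4m0s wait-control-plane budget. The triage commands kubeadm prints above can be run in one pass from inside the node (a sketch; the profile name is this run's):

	minikube ssh -p kubernetes-upgrade-001550
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 100
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID        # substitute an ID from the previous command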
	W0216 17:24:48.509532  234987 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-001550 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-001550 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
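Of the preflight warnings repeated above, the cgroup-driver one is advisory in this setup (the generated KubeletConfiguration pins cgroupDriver: cgroupfs to match Docker), but on a self-managed node the conventional fix is to move Docker onto the systemd driver so both sides agree. A sketch, assuming root on the node and a Docker restart is acceptable:

	# /etc/docker/daemon.json
	{ "exec-opts": ["native.cgroupdriver=systemd"] }

	sudo systemctl restart docker
	# then set cgroupDriver: systemd in the KubeletConfiguration to match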
	
	I0216 17:24:48.509603  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:24:51.414497  234987 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.904808792s)
	I0216 17:24:51.414572  234987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:24:51.432122  234987 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:24:51.432212  234987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:24:51.442955  234987 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:24:51.443026  234987 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:24:51.600079  234987 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:24:51.600141  234987 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:24:51.899684  234987 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:24:51.899772  234987 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:24:51.899838  234987 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:24:51.899881  234987 kubeadm.go:322] OS: Linux
	I0216 17:24:51.899935  234987 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:24:51.899994  234987 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:24:51.900053  234987 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:24:51.900113  234987 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:24:51.900177  234987 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:24:51.900233  234987 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:24:52.009426  234987 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:24:52.009566  234987 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:24:52.009683  234987 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:24:52.311480  234987 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:24:52.313559  234987 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:24:52.327403  234987 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:24:52.440059  234987 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:24:52.442917  234987 out.go:204]   - Generating certificates and keys ...
	I0216 17:24:52.442988  234987 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:24:52.443039  234987 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:24:52.443098  234987 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:24:52.443144  234987 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:24:52.443198  234987 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:24:52.443239  234987 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:24:52.443961  234987 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:24:52.444041  234987 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:24:52.444126  234987 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:24:52.444234  234987 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:24:52.444275  234987 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:24:52.444332  234987 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:24:52.760440  234987 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:24:52.955023  234987 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:24:53.586030  234987 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:24:54.004914  234987 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:24:54.006121  234987 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:24:54.008594  234987 out.go:204]   - Booting up control plane ...
	I0216 17:24:54.008740  234987 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:24:54.015132  234987 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:24:54.016898  234987 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:24:54.017835  234987 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:24:54.021021  234987 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:25:34.021198  234987 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:28:54.022357  234987 kubeadm.go:322] 
	I0216 17:28:54.022475  234987 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:28:54.022520  234987 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:28:54.022532  234987 kubeadm.go:322] 
	I0216 17:28:54.022567  234987 kubeadm.go:322] This error is likely caused by:
	I0216 17:28:54.022598  234987 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:28:54.022724  234987 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:28:54.022746  234987 kubeadm.go:322] 
	I0216 17:28:54.022914  234987 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:28:54.022971  234987 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:28:54.023016  234987 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:28:54.023029  234987 kubeadm.go:322] 
	I0216 17:28:54.023147  234987 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:28:54.023226  234987 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:28:54.023297  234987 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:28:54.023356  234987 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:28:54.023466  234987 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:28:54.023506  234987 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:28:54.026042  234987 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:28:54.026207  234987 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:28:54.026395  234987 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:28:54.026488  234987 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:28:54.026565  234987 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:28:54.026647  234987 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:28:54.026747  234987 kubeadm.go:406] StartCluster complete in 8m8.960347487s
	I0216 17:28:54.026833  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:28:54.044969  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.045002  234987 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:28:54.045075  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:28:54.063180  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.063204  234987 logs.go:278] No container was found matching "etcd"
	I0216 17:28:54.063255  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:28:54.080473  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.080504  234987 logs.go:278] No container was found matching "coredns"
	I0216 17:28:54.080566  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:28:54.098968  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.098995  234987 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:28:54.099057  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:28:54.117429  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.117453  234987 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:28:54.117496  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:28:54.137395  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.137422  234987 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:28:54.137477  234987 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:28:54.157823  234987 logs.go:276] 0 containers: []
	W0216 17:28:54.157850  234987 logs.go:278] No container was found matching "kindnet"
	I0216 17:28:54.157863  234987 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:28:54.157881  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:28:54.227654  234987 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
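The connection-refused from kubectl above is consistent with the init timeout: nothing ever bound port 8443 because the apiserver static pod never started. A direct probe from inside the node would show the same thing (sketch):

	curl -sk https://localhost:8443/healthz; echo
	# 'connection refused' here means the control plane never came up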
	I0216 17:28:54.227685  234987 logs.go:123] Gathering logs for Docker ...
	I0216 17:28:54.227699  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:28:54.250778  234987 logs.go:123] Gathering logs for container status ...
	I0216 17:28:54.250817  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:28:54.290148  234987 logs.go:123] Gathering logs for kubelet ...
	I0216 17:28:54.290179  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:28:54.312394  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:32 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:32.398476    5698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-001550_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:28:54.312588  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:32 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:32.399574    5698 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-kubernetes-upgrade-001550_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:28:54.323594  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:38 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:38.386766    5698 pod_workers.go:191] Error syncing pod a92b4fa752bf614c8faca04c9c143a81 ("etcd-kubernetes-upgrade-001550_kube-system(a92b4fa752bf614c8faca04c9c143a81)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:28:54.331717  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:42 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:42.389938    5698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-001550_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:28:54.336071  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:44 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:44.388361    5698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-001550_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:28:54.341418  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:47 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:47.386307    5698 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-kubernetes-upgrade-001550_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:28:54.347269  234987 logs.go:138] Found kubelet problem: Feb 16 17:28:50 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:50.384229    5698 pod_workers.go:191] Error syncing pod a92b4fa752bf614c8faca04c9c143a81 ("etcd-kubernetes-upgrade-001550_kube-system(a92b4fa752bf614c8faca04c9c143a81)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:28:54.353558  234987 logs.go:123] Gathering logs for dmesg ...
	I0216 17:28:54.353583  234987 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0216 17:28:54.377630  234987 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:28:54.377676  234987 out.go:239] * 
	W0216 17:28:54.378631  234987 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:28:54.381506  234987 out.go:177] X Problems detected in kubelet:
	I0216 17:28:54.382996  234987 out.go:177]   Feb 16 17:28:32 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:32.398476    5698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-001550_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:28:54.384438  234987 out.go:177]   Feb 16 17:28:32 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:32.399574    5698 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-kubernetes-upgrade-001550_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:28:54.385809  234987 out.go:177]   Feb 16 17:28:38 kubernetes-upgrade-001550 kubelet[5698]: E0216 17:28:38.386766    5698 pod_workers.go:191] Error syncing pod a92b4fa752bf614c8faca04c9c143a81 ("etcd-kubernetes-upgrade-001550_kube-system(a92b4fa752bf614c8faca04c9c143a81)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:28:54.388895  234987 out.go:177] 
	W0216 17:28:54.390146  234987 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:28:54.390186  234987 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:28:54.390209  234987 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 17:28:54.391906  234987 out.go:177] 

                                                
                                                
** /stderr **
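The kubeadm output above boils down to a kubelet that never brought up the control plane, and the "Problems detected in kubelet" lines point at the likely root cause for this run: the v1.16.0 kubelet's dockershim cannot inspect images under Docker 25.0.3 ("Id or size of image ... is not set"), which the [WARNING SystemVerification] about the unvalidated Docker version also hints at. A minimal diagnostic sketch along the lines the log itself suggests (profile name and binary path taken from this run; the cgroup-driver check follows minikube's own suggestion and is not confirmed as the cause here):

	# inside the node, e.g. via: out/minikube-linux-amd64 ssh -p kubernetes-upgrade-001550
	systemctl status kubelet                                  # is the kubelet running at all?
	journalctl -xeu kubelet | tail -n 50                      # why did it stop or stall?
	docker ps -a | grep kube | grep -v pause                  # did any control-plane container start?
	docker image inspect k8s.gcr.io/kube-scheduler:v1.16.0    # reproduce the ImageInspectError path

	# on the host: check Docker's cgroup driver, then retry with the suggested override
	docker info --format '{{.CgroupDriver}}'
	out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 \
	  --extra-config=kubelet.cgroup-driver=systemd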
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-001550
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-001550: (2.481823129s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-001550 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-001550 status --format={{.Host}}: exit status 7 (88.657761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
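On why "exit status 7 (may be ok)": per minikube's status help text, the exit code encodes the component states as bits from right to left (1 for the host/VM not OK, 2 for the cluster not OK, 4 for Kubernetes not OK), so 7 after a clean stop just means everything is down, which is what the test expects. A quick hedged sketch for reading it (profile from this run):

	out/minikube-linux-amd64 -p kubernetes-upgrade-001550 status --format={{.Host}}
	echo $?   # 7 = 1 + 2 + 4: host, cluster, and Kubernetes all stopped, expected here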
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m32.783741334s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-001550 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (82.166829ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-001550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-001550
	    minikube start -p kubernetes-upgrade-001550 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0015502 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-001550 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
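The refusal is expected: a control plane that has run at v1.29.0-rc.2 has written etcd data and API objects that older components cannot be assumed to read, so minikube hard-fails the downgrade (K8S_DOWNGRADE_UNSUPPORTED, exit 106) instead of attempting it. That rationale is the usual Kubernetes version-skew argument, not something this log states explicitly. The first suggested recovery, adapted from the output above to the test binary used in this run:

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-001550
	out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --kubernetes-version=v1.16.0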
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0216 17:33:37.749697   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:37.754956   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:37.765211   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:37.785484   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:37.825766   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:37.906044   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:38.066458   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:38.387050   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:39.027980   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:40.308974   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:40.561395   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:33:42.869669   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:46.477549   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:47.990582   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:33:52.115769   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:33:56.686790   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:33:58.231642   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-001550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.451254718s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-16 17:33:58.391925843 +0000 UTC m=+3149.861931437
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-001550
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-001550:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7",
	        "Created": "2024-02-16T17:20:29.172780985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 371538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:28:57.58854336Z",
	            "FinishedAt": "2024-02-16T17:28:56.634659486Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7/hosts",
	        "LogPath": "/var/lib/docker/containers/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7-json.log",
	        "Name": "/kubernetes-upgrade-001550",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-001550:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-001550",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c4cbc5d59453ea09a74e38f4ea9039a9c5ef3880488ffc5fa59fe92788b1d58-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c4cbc5d59453ea09a74e38f4ea9039a9c5ef3880488ffc5fa59fe92788b1d58/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c4cbc5d59453ea09a74e38f4ea9039a9c5ef3880488ffc5fa59fe92788b1d58/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c4cbc5d59453ea09a74e38f4ea9039a9c5ef3880488ffc5fa59fe92788b1d58/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-001550",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-001550/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-001550",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-001550",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-001550",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffbdb8ca44fb909e6cd96e2422149cefaaefcfadc00c41c68f15822d03a0ed7f",
	            "SandboxKey": "/var/run/docker/netns/ffbdb8ca44fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-001550": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2e218160c1a8",
	                        "kubernetes-upgrade-001550"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "05405816cf1b8c30be2a2c75f1919f6b9a653612f6f8fb36cde86ad885fafd4e",
	                    "EndpointID": "ebadc79a3cd4f3f32fcf4901f538b7a3ab6a6abf4dac43bfb4aaadc45af6be95",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-001550",
	                        "2e218160c1a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
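The full `docker inspect` dump above is verbose; for a post-mortem like this one, the fields that matter (container state, restart count, forwarded apiserver port) can be pulled with a Go template. A hypothetical one-liner, with the container name taken from this run:

	docker inspect kubernetes-upgrade-001550 --format \
	  'state={{.State.Status}} restarts={{.RestartCount}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostIp}}:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# for this container: state=running restarts=0 apiserver=127.0.0.1:33059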
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-001550 -n kubernetes-upgrade-001550
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-001550 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | systemctl cat cri-docker                               |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo cat                             | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo cat                             | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | cri-dockerd --version                                  |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | systemctl status containerd                            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | systemctl cat containerd                               |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo cat                             | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | /lib/systemd/system/containerd.service                 |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo cat                             | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | containerd config dump                                 |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC |                     |
	|         | systemctl status crio --all                            |                           |         |         |                     |                     |
	|         | --full --no-pager                                      |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo                                 | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | systemctl cat crio --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo find                            | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p kubenet-123826 sudo crio                            | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p kubenet-123826                                      | kubenet-123826            | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:29 UTC |
	| start   | -p embed-certs-162802                                  | embed-certs-162802        | jenkins | v1.32.0 | 16 Feb 24 17:29 UTC | 16 Feb 24 17:30 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-162802            | embed-certs-162802        | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC | 16 Feb 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-162802                                  | embed-certs-162802        | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC | 16 Feb 24 17:30 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-408847             | no-preload-408847         | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC | 16 Feb 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-408847                                   | no-preload-408847         | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC | 16 Feb 24 17:30 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-162802                 | embed-certs-162802        | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC | 16 Feb 24 17:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-162802                                  | embed-certs-162802        | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408847                  | no-preload-408847         | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC | 16 Feb 24 17:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-408847                                   | no-preload-408847         | jenkins | v1.32.0 | 16 Feb 24 17:30 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-001550                           | kubernetes-upgrade-001550 | jenkins | v1.32.0 | 16 Feb 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-001550                           | kubernetes-upgrade-001550 | jenkins | v1.32.0 | 16 Feb 24 17:33 UTC | 16 Feb 24 17:33 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 17:33:29
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 17:33:29.993201  409577 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:33:29.993352  409577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:33:29.993365  409577 out.go:304] Setting ErrFile to fd 2...
	I0216 17:33:29.993373  409577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:33:29.993579  409577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:33:29.994137  409577 out.go:298] Setting JSON to false
	I0216 17:33:29.995511  409577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4556,"bootTime":1708100254,"procs":365,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 17:33:29.995572  409577 start.go:139] virtualization: kvm guest
	I0216 17:33:29.997640  409577 out.go:177] * [kubernetes-upgrade-001550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 17:33:29.999181  409577 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:33:29.999144  409577 notify.go:220] Checking for updates...
	I0216 17:33:30.000605  409577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:33:30.003938  409577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:33:30.005451  409577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 17:33:30.006880  409577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 17:33:30.008420  409577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:33:30.010351  409577 config.go:182] Loaded profile config "kubernetes-upgrade-001550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 17:33:30.010818  409577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:33:30.035441  409577 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:33:30.035589  409577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:33:30.090986  409577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:96 SystemTime:2024-02-16 17:33:30.08044323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:33:30.091085  409577 docker.go:295] overlay module found
	I0216 17:33:30.093192  409577 out.go:177] * Using the docker driver based on existing profile
	I0216 17:33:30.094570  409577 start.go:299] selected driver: docker
	I0216 17:33:30.094588  409577 start.go:903] validating driver "docker" against &{Name:kubernetes-upgrade-001550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:33:30.094703  409577 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:33:30.095539  409577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:33:30.150675  409577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:96 SystemTime:2024-02-16 17:33:30.141517011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:33:30.151061  409577 cni.go:84] Creating CNI manager for ""
	I0216 17:33:30.151091  409577 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 17:33:30.151116  409577 start_flags.go:323] config:
	{Name:kubernetes-upgrade-001550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:33:30.153159  409577 out.go:177] * Starting control plane node kubernetes-upgrade-001550 in cluster kubernetes-upgrade-001550
	I0216 17:33:30.154473  409577 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:33:30.155764  409577 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:33:30.156950  409577 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 17:33:30.156995  409577 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0216 17:33:30.157011  409577 cache.go:56] Caching tarball of preloaded images
	I0216 17:33:30.157051  409577 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:33:30.157196  409577 preload.go:174] Found /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 17:33:30.157218  409577 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0216 17:33:30.157328  409577 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/config.json ...
	I0216 17:33:30.175607  409577 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:33:30.175631  409577 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:33:30.175652  409577 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:33:30.175689  409577 start.go:365] acquiring machines lock for kubernetes-upgrade-001550: {Name:mkeeae0b378399243e2da0ed1a5a81f6b7830f0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:33:30.175772  409577 start.go:369] acquired machines lock for "kubernetes-upgrade-001550" in 48.971µs
	I0216 17:33:30.175797  409577 start.go:96] Skipping create...Using existing machine configuration
	I0216 17:33:30.175806  409577 fix.go:54] fixHost starting: 
	I0216 17:33:30.176032  409577 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:33:30.193135  409577 fix.go:102] recreateIfNeeded on kubernetes-upgrade-001550: state=Running err=<nil>
	W0216 17:33:30.193166  409577 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 17:33:30.195283  409577 out.go:177] * Updating the running docker "kubernetes-upgrade-001550" container ...
	I0216 17:33:26.587895  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:29.086275  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:29.827250  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:31.827633  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:30.196933  409577 machine.go:88] provisioning docker machine ...
	I0216 17:33:30.196965  409577 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-001550"
	I0216 17:33:30.197026  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:30.214922  409577 main.go:141] libmachine: Using SSH client type: native
	I0216 17:33:30.215325  409577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I0216 17:33:30.215348  409577 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-001550 && echo "kubernetes-upgrade-001550" | sudo tee /etc/hostname
	I0216 17:33:30.360812  409577 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-001550
	
	I0216 17:33:30.360906  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:30.378747  409577 main.go:141] libmachine: Using SSH client type: native
	I0216 17:33:30.379208  409577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I0216 17:33:30.379235  409577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-001550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-001550/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-001550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:33:30.512788  409577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
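
The shell fragment above is an idempotent rewrite of /etc/hosts: it updates an existing 127.0.1.1 line, or appends one, only when the hostname is not already mapped. For orientation, a minimal Go sketch of the same logic; ensureHostsEntry is a hypothetical helper written for this note, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry maps 127.0.1.1 to hostname in the given hosts file,
// rewriting an existing 127.0.1.1 line or appending one if none exists.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped to some address? Then there is nothing to do.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	entry := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "kubernetes-upgrade-001550"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}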
	I0216 17:33:30.512827  409577 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 17:33:30.512866  409577 ubuntu.go:177] setting up certificates
	I0216 17:33:30.512880  409577 provision.go:83] configureAuth start
	I0216 17:33:30.512943  409577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-001550
	I0216 17:33:30.531543  409577 provision.go:138] copyHostCerts
	I0216 17:33:30.531605  409577 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 17:33:30.531620  409577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 17:33:30.531702  409577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 17:33:30.531833  409577 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 17:33:30.531848  409577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 17:33:30.531885  409577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 17:33:30.531975  409577 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 17:33:30.532056  409577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 17:33:30.532249  409577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 17:33:30.532366  409577 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-001550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-001550]
	I0216 17:33:30.593117  409577 provision.go:172] copyRemoteCerts
	I0216 17:33:30.593191  409577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:33:30.593235  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:30.612090  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:30.710165  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 17:33:30.733251  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0216 17:33:30.756623  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0216 17:33:30.779217  409577 provision.go:86] duration metric: configureAuth took 266.325154ms
	I0216 17:33:30.779245  409577 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:33:30.779393  409577 config.go:182] Loaded profile config "kubernetes-upgrade-001550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 17:33:30.779436  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:30.797358  409577 main.go:141] libmachine: Using SSH client type: native
	I0216 17:33:30.797849  409577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I0216 17:33:30.797877  409577 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:33:30.937041  409577 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:33:30.937065  409577 ubuntu.go:71] root file system type: overlay
	I0216 17:33:30.937192  409577 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:33:30.937264  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:30.955007  409577 main.go:141] libmachine: Using SSH client type: native
	I0216 17:33:30.955453  409577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I0216 17:33:30.955549  409577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:33:31.099339  409577 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
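
The comment block inside the unit above describes a general systemd rule: when a drop-in inherits a base configuration, an empty ExecStart= must first clear the inherited command, or systemd rejects the unit with "more than one ExecStart=". A small Go sketch of that rule applied to a local unit file; execStartValid is a hypothetical checker, not part of minikube:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// execStartValid reports whether a unit would pass systemd's check:
// an empty "ExecStart=" resets the command list, so at most one
// non-empty ExecStart= may remain after the last reset.
func execStartValid(sc *bufio.Scanner) bool {
	n := 0
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "ExecStart=" {
			n = 0 // empty assignment clears inherited commands
		} else if strings.HasPrefix(line, "ExecStart=") {
			n++
		}
	}
	return n <= 1
}

func main() {
	f, err := os.Open("/lib/systemd/system/docker.service")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Println("ExecStart ok:", execStartValid(bufio.NewScanner(f)))
}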
	
	I0216 17:33:31.099436  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:31.117529  409577 main.go:141] libmachine: Using SSH client type: native
	I0216 17:33:31.117911  409577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I0216 17:33:31.117933  409577 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:33:31.254005  409577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
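
The diff-or-replace command above is a common idempotent-install idiom: stage the new unit as docker.service.new, and only if it differs from the installed file, move it into place and daemon-reload/enable/restart. A rough Go equivalent, assuming local file access rather than the SSH session minikube uses; installIfChanged is a hypothetical name:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged installs the staged unit and restarts docker only
// when its content differs from the current unit, mirroring the
// "diff -u ... || { mv ...; systemctl ... }" one-liner above.
func installIfChanged(current, staged string) error {
	old, _ := os.ReadFile(current) // a missing unit counts as "changed"
	neu, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(old, neu) {
		return os.Remove(staged) // identical: discard staged copy, skip restart
	}
	if err := os.Rename(staged, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Unchanged runs leave the running daemon untouched, which is why repeated provisioning of the same machine is cheap.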
	I0216 17:33:31.254035  409577 machine.go:91] provisioned docker machine in 1.057084587s
	I0216 17:33:31.254049  409577 start.go:300] post-start starting for "kubernetes-upgrade-001550" (driver="docker")
	I0216 17:33:31.254067  409577 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:33:31.254133  409577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:33:31.254183  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:31.272087  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:31.369263  409577 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:33:31.372795  409577 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:33:31.372826  409577 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:33:31.372835  409577 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:33:31.372841  409577 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:33:31.372852  409577 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 17:33:31.372896  409577 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 17:33:31.372961  409577 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 17:33:31.373041  409577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:33:31.381002  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:33:31.406009  409577 start.go:303] post-start completed in 151.942995ms
	I0216 17:33:31.406119  409577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:33:31.406175  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:31.424665  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:31.517274  409577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:33:31.522068  409577 fix.go:56] fixHost completed within 1.346249485s
	I0216 17:33:31.522099  409577 start.go:83] releasing machines lock for "kubernetes-upgrade-001550", held for 1.346314849s
	I0216 17:33:31.522182  409577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-001550
	I0216 17:33:31.539587  409577 ssh_runner.go:195] Run: cat /version.json
	I0216 17:33:31.539640  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:31.539674  409577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:33:31.539751  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:31.557858  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:31.558140  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:31.648353  409577 ssh_runner.go:195] Run: systemctl --version
	I0216 17:33:31.738151  409577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 17:33:31.743162  409577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 17:33:31.743241  409577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:33:31.753085  409577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:33:31.762030  409577 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0216 17:33:31.762072  409577 start.go:475] detecting cgroup driver to use...
	I0216 17:33:31.762106  409577 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:33:31.762210  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:33:31.778268  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 17:33:31.789170  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:33:31.799607  409577 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:33:31.799729  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:33:31.811622  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:33:31.822344  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:33:31.832430  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:33:31.842925  409577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:33:31.852009  409577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:33:31.862040  409577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:33:31.870391  409577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:33:31.879016  409577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:33:31.974058  409577 ssh_runner.go:195] Run: sudo systemctl restart containerd
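
The run of sed commands just above patches /etc/containerd/config.toml in place so containerd matches the detected cgroupfs driver and the expected pause image, then restarts the service. A compact Go sketch of the same rewrite-by-regexp approach; rewriteConfig and the edit list are illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// edits mirrors the sed pipeline above: force the cgroupfs driver and
// the expected sandbox image, preserving each line's indentation.
var edits = []struct{ re, repl string }{
	{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
	{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
}

func rewriteConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewriteConfig("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}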
	I0216 17:33:31.589009  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:34.086800  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:34.326579  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:36.826218  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:36.586288  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:39.086361  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:42.129297  409577 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.15519401s)
	I0216 17:33:42.129393  409577 start.go:475] detecting cgroup driver to use...
	I0216 17:33:42.129432  409577 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:33:42.129490  409577 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:33:42.140832  409577 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:33:42.140894  409577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:33:42.152808  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:33:42.169576  409577 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:33:42.172804  409577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:33:42.181436  409577 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:33:42.202819  409577 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:33:42.313061  409577 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:33:42.422603  409577 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:33:42.422733  409577 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:33:42.440947  409577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:33:42.525833  409577 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:33:42.813758  409577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 17:33:42.827779  409577 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 17:33:42.846182  409577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 17:33:42.857341  409577 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 17:33:42.937022  409577 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 17:33:43.024540  409577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:33:43.099571  409577 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 17:33:43.113115  409577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 17:33:43.124451  409577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:33:43.208224  409577 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 17:33:43.279472  409577 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 17:33:43.279531  409577 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 17:33:43.283730  409577 start.go:543] Will wait 60s for crictl version
	I0216 17:33:43.283779  409577 ssh_runner.go:195] Run: which crictl
	I0216 17:33:43.287036  409577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 17:33:43.336215  409577 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0216 17:33:43.336280  409577 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:33:43.360345  409577 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:33:38.826546  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:40.826648  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:42.827555  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:43.387624  409577 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0216 17:33:43.387720  409577 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-001550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:33:43.404672  409577 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0216 17:33:43.408728  409577 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 17:33:43.408786  409577 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:33:43.428842  409577 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:33:43.428874  409577 docker.go:615] Images already preloaded, skipping extraction
	I0216 17:33:43.428933  409577 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:33:43.448844  409577 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:33:43.448873  409577 cache_images.go:84] Images are preloaded, skipping loading
	I0216 17:33:43.448932  409577 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:33:43.505743  409577 cni.go:84] Creating CNI manager for ""
	I0216 17:33:43.505771  409577 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 17:33:43.505785  409577 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:33:43.505801  409577 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-001550 NodeName:kubernetes-upgrade-001550 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 17:33:43.505974  409577 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-001550"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
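
The generated kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by "---" separators. A stdlib-only Go sketch that splits such a multi-document config and reports each document's kind; the kinds helper is hypothetical and shown only for orientation, real code would use a YAML parser:

package main

import (
	"fmt"
	"strings"
)

// kinds returns the "kind:" value of each YAML document in a
// multi-document config like the one rendered above.
func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if k, ok := strings.CutPrefix(strings.TrimSpace(line), "kind:"); ok {
				out = append(out, strings.TrimSpace(k))
				break
			}
		}
	}
	return out
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}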
	
	I0216 17:33:43.506069  409577 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-001550 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 17:33:43.506162  409577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0216 17:33:43.515112  409577 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:33:43.515193  409577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:33:43.524176  409577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0216 17:33:43.541979  409577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0216 17:33:43.560536  409577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0216 17:33:43.578145  409577 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:33:43.581934  409577 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550 for IP: 192.168.67.2
	I0216 17:33:43.581974  409577 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:33:43.582148  409577 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 17:33:43.582205  409577 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 17:33:43.582328  409577 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key
	I0216 17:33:43.582408  409577 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key.c7fa3a9e
	I0216 17:33:43.582455  409577 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.key
	I0216 17:33:43.582620  409577 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 17:33:43.582657  409577 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 17:33:43.582675  409577 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 17:33:43.582720  409577 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 17:33:43.582759  409577 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:33:43.582802  409577 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 17:33:43.582871  409577 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:33:43.583780  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:33:43.609216  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 17:33:43.634550  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:33:43.658465  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 17:33:43.683001  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:33:43.710258  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 17:33:43.740861  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:33:43.764821  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 17:33:43.787414  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 17:33:43.812970  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:33:43.837712  409577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 17:33:43.862481  409577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:33:43.879895  409577 ssh_runner.go:195] Run: openssl version
	I0216 17:33:43.885212  409577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 17:33:43.894666  409577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 17:33:43.898169  409577 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 17:33:43.898228  409577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 17:33:43.905069  409577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 17:33:43.914823  409577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:33:43.925574  409577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:33:43.929449  409577 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:33:43.929543  409577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:33:43.936111  409577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:33:43.945181  409577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 17:33:43.954771  409577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 17:33:43.958331  409577 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 17:33:43.958388  409577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 17:33:43.965060  409577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
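
The openssl/ln sequences above implement OpenSSL's hashed-directory convention for /etc/ssl/certs: each CA is linked under its subject-name hash with a ".0" suffix so TLS libraries can find it by hash lookup. A minimal sketch of the same step for one of the certificates logged above:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

Here the hash is b5213941, matching the /etc/ssl/certs/b5213941.0 link created in the log.
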
	I0216 17:33:43.973734  409577 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:33:43.977132  409577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 17:33:43.983508  409577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 17:33:43.989850  409577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 17:33:43.996423  409577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 17:33:44.002973  409577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 17:33:44.009926  409577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
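
The -checkend 86400 probes above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero in that case, which is what would trigger certificate regeneration. Standalone form, using one of the paths logged above:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
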
	I0216 17:33:44.016655  409577 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-001550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-001550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:33:44.016801  409577 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:33:44.035025  409577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:33:44.043735  409577 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 17:33:44.043784  409577 kubeadm.go:636] restartCluster start
	I0216 17:33:44.043841  409577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 17:33:44.052533  409577 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:44.053469  409577 kubeconfig.go:92] found "kubernetes-upgrade-001550" server: "https://192.168.67.2:8443"
	I0216 17:33:44.054797  409577 kapi.go:59] client config for kubernetes-upgrade-001550: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.crt", KeyFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key", CAFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c29b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 17:33:44.055556  409577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 17:33:44.064674  409577 api_server.go:166] Checking apiserver status ...
	I0216 17:33:44.064732  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:33:44.075044  409577 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:44.565351  409577 api_server.go:166] Checking apiserver status ...
	I0216 17:33:44.565440  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:33:44.576835  409577 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:41.586833  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:43.587106  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:45.326384  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:47.327738  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:45.065347  409577 api_server.go:166] Checking apiserver status ...
	I0216 17:33:45.065439  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:33:45.075923  409577 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:45.565338  409577 api_server.go:166] Checking apiserver status ...
	I0216 17:33:45.565427  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:33:45.577418  409577 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:46.065082  409577 api_server.go:166] Checking apiserver status ...
	I0216 17:33:46.065154  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:33:46.114392  409577 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/13740/cgroup
	I0216 17:33:46.199214  409577 api_server.go:182] apiserver freezer: "7:freezer:/docker/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7/kubepods/burstable/pod6d136c416382b7ba35dab7bbeece88b8/fd2f3b124dcfaa1c2e4958e97c131a1a772a743805d75ef658a0db60d859daed"
	I0216 17:33:46.199296  409577 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2e218160c1a8675a5d914a220acf6358a2834831b291ef9c538171df18caf1f7/kubepods/burstable/pod6d136c416382b7ba35dab7bbeece88b8/fd2f3b124dcfaa1c2e4958e97c131a1a772a743805d75ef658a0db60d859daed/freezer.state
	I0216 17:33:46.209757  409577 api_server.go:204] freezer state: "THAWED"
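
The freezer lookup above is how minikube tells a running apiserver from a paused one: it resolves the container's cgroup path from /proc/<pid>/cgroup and reads freezer.state, expecting THAWED (a paused node would report FROZEN). A sketch of the same check, assuming the cgroup v1 freezer layout seen in this run:

    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    cgpath=$(sudo grep ':freezer:' "/proc/${pid}/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${cgpath}/freezer.state"   # expect THAWED
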
	I0216 17:33:46.209805  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:48.937363  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 17:33:48.937408  409577 retry.go:31] will retry after 246.905453ms: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 17:33:49.184962  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:49.196811  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:49.196849  409577 retry.go:31] will retry after 313.152755ms: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:49.510272  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:49.514478  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:49.514516  409577 retry.go:31] will retry after 326.807616ms: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:49.842123  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:49.846367  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:49.846411  409577 retry.go:31] will retry after 434.65534ms: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:46.086125  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:48.086892  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:50.087301  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:49.827453  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:52.327458  389765 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w2kjd" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:50.281609  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:50.285672  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0216 17:33:50.298909  409577 system_pods.go:86] 5 kube-system pods found
	I0216 17:33:50.298944  409577 system_pods.go:89] "etcd-kubernetes-upgrade-001550" [fa121eff-e36b-4e8f-9ef3-f8db70e6a48b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 17:33:50.298953  409577 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-001550" [28158c29-207b-4284-bb84-0d9c417c215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 17:33:50.298964  409577 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-001550" [fab839ae-ebd1-4902-a07f-c519b3967bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 17:33:50.298975  409577 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-001550" [0749dc19-7afc-4baa-b25a-9be2a9b294ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 17:33:50.298990  409577 system_pods.go:89] "storage-provisioner" [649f4ce0-8671-446a-9bd5-5d0782f0fdcb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 17:33:50.299000  409577 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0216 17:33:50.299017  409577 kubeadm.go:1135] stopping kube-system containers ...
	I0216 17:33:50.299061  409577 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:33:50.319179  409577 docker.go:483] Stopping containers: [da48efb7b10f 54b63cfc9039 4cde51b9e617 fd2f3b124dcf 85c4ac0846b2 9e998eb5ca20 2fa2a6d9cae8 e2ddf8aa8e5f 253f8ed0f784 6ae7591b8d39 e029be17d8d0 c15d2356f510 d295f2804c43 1a5c66158e8f eae859b43fef 065d0317b6a0]
	I0216 17:33:50.319248  409577 ssh_runner.go:195] Run: docker stop da48efb7b10f 54b63cfc9039 4cde51b9e617 fd2f3b124dcf 85c4ac0846b2 9e998eb5ca20 2fa2a6d9cae8 e2ddf8aa8e5f 253f8ed0f784 6ae7591b8d39 e029be17d8d0 c15d2356f510 d295f2804c43 1a5c66158e8f eae859b43fef 065d0317b6a0
	I0216 17:33:51.327039  409577 ssh_runner.go:235] Completed: docker stop da48efb7b10f 54b63cfc9039 4cde51b9e617 fd2f3b124dcf 85c4ac0846b2 9e998eb5ca20 2fa2a6d9cae8 e2ddf8aa8e5f 253f8ed0f784 6ae7591b8d39 e029be17d8d0 c15d2356f510 d295f2804c43 1a5c66158e8f eae859b43fef 065d0317b6a0: (1.007754289s)
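
The name filter works because cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching the namespace segment selects every kube-system container in one pass before the kubelet is stopped. To list them with names for inspection (same filter as the log):

    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}} {{.Names}}'
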
	I0216 17:33:51.327119  409577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 17:33:51.429261  409577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:33:51.494023  409577 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 16 17:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 16 17:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 16 17:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 16 17:33 /etc/kubernetes/scheduler.conf
	
	I0216 17:33:51.494107  409577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 17:33:51.504808  409577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 17:33:51.516302  409577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 17:33:51.527901  409577 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:51.528018  409577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 17:33:51.594848  409577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 17:33:51.606042  409577 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:33:51.606112  409577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
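
The grep-then-rm sequence above validates each kubeconfig under /etc/kubernetes against the expected control-plane endpoint and deletes any file that does not reference it, so the kubeconfig phase below regenerates those files. Roughly equivalent shell:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}" \
        || sudo rm -f "/etc/kubernetes/${f}"
    done
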
	I0216 17:33:51.615196  409577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:33:51.624421  409577 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 17:33:51.624458  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:33:51.670037  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:33:52.369297  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:33:52.519495  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:33:52.582467  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
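
Rather than a full kubeadm init, the restart path replays only the phases it needs, in dependency order: certs, kubeconfigs, kubelet bootstrap, static control-plane manifests, then local etcd. The same sequence by hand, using the versioned binary path from this run:

    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
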
	I0216 17:33:52.696414  409577 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:33:52.696640  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:33:53.196699  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:33:53.696815  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:33:53.710376  409577 api_server.go:72] duration metric: took 1.01395312s to wait for apiserver process to appear ...
	I0216 17:33:53.710413  409577 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:33:53.710438  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:52.586350  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:54.587060  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-98wtm" in "kube-system" namespace has status "Ready":"False"
	I0216 17:33:55.808514  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 17:33:55.808542  409577 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 17:33:55.808559  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:55.905215  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 17:33:55.905259  409577 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:56.210687  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:56.214810  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 17:33:56.214835  409577 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:56.711458  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:56.715998  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 17:33:56.716030  409577 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:33:57.210564  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:57.214570  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0216 17:33:57.220970  409577 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 17:33:57.220998  409577 api_server.go:131] duration metric: took 3.510577084s to wait for apiserver health ...
	I0216 17:33:57.221007  409577 cni.go:84] Creating CNI manager for ""
	I0216 17:33:57.221019  409577 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 17:33:57.223271  409577 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 17:33:57.224817  409577 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 17:33:57.233874  409577 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
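
The 457-byte file written here is a CNI network list for the built-in bridge plugin; minikube falls back to it because with the docker driver and docker runtime on Kubernetes v1.24+ there is no dockershim networking left. A representative bridge conflist (illustrative content, not the byte-for-byte file minikube generates):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
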
	I0216 17:33:57.250905  409577 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:33:57.257519  409577 system_pods.go:59] 5 kube-system pods found
	I0216 17:33:57.257549  409577 system_pods.go:61] "etcd-kubernetes-upgrade-001550" [fa121eff-e36b-4e8f-9ef3-f8db70e6a48b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 17:33:57.257557  409577 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-001550" [28158c29-207b-4284-bb84-0d9c417c215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 17:33:57.257565  409577 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-001550" [fab839ae-ebd1-4902-a07f-c519b3967bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 17:33:57.257571  409577 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-001550" [0749dc19-7afc-4baa-b25a-9be2a9b294ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 17:33:57.257589  409577 system_pods.go:61] "storage-provisioner" [649f4ce0-8671-446a-9bd5-5d0782f0fdcb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 17:33:57.257597  409577 system_pods.go:74] duration metric: took 6.668714ms to wait for pod list to return data ...
	I0216 17:33:57.257604  409577 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:33:57.260567  409577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0216 17:33:57.260596  409577 node_conditions.go:123] node cpu capacity is 8
	I0216 17:33:57.260605  409577 node_conditions.go:105] duration metric: took 2.997233ms to run NodePressure ...
	I0216 17:33:57.260622  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:33:57.510885  409577 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 17:33:57.518496  409577 ops.go:34] apiserver oom_adj: -16
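
The -16 is the legacy /proc/<pid>/oom_adj view of the oom_score_adj the kubelet gives control-plane static pods (-997): the kernel maps the -1000..1000 oom_score_adj range onto the old -17..15 scale, and -997 rounds to -16, i.e. nearly exempt from the OOM killer. Both views side by side:

    cat /proc/$(pgrep kube-apiserver)/oom_adj /proc/$(pgrep kube-apiserver)/oom_score_adj
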
	I0216 17:33:57.518520  409577 kubeadm.go:640] restartCluster took 13.474728655s
	I0216 17:33:57.518540  409577 kubeadm.go:406] StartCluster complete in 13.50188402s
	I0216 17:33:57.518560  409577 settings.go:142] acquiring lock: {Name:mkc0445e63ab2bfc5d2d7306f3af19ca96df275c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:33:57.518637  409577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:33:57.519631  409577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/kubeconfig: {Name:mkdc2ed683d72ff0e162ea619463de7edb9c0858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:33:57.519884  409577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 17:33:57.519998  409577 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 17:33:57.520086  409577 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-001550"
	I0216 17:33:57.520097  409577 config.go:182] Loaded profile config "kubernetes-upgrade-001550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 17:33:57.520104  409577 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-001550"
	I0216 17:33:57.520122  409577 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-001550"
	W0216 17:33:57.520180  409577 addons.go:243] addon storage-provisioner should already be in state true
	I0216 17:33:57.520242  409577 host.go:66] Checking if "kubernetes-upgrade-001550" exists ...
	I0216 17:33:57.520129  409577 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-001550"
	I0216 17:33:57.520679  409577 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:33:57.520742  409577 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:33:57.520794  409577 kapi.go:59] client config for kubernetes-upgrade-001550: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.crt", KeyFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key", CAFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c29b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 17:33:57.524637  409577 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-001550" context rescaled to 1 replicas
	I0216 17:33:57.524694  409577 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:33:57.526662  409577 out.go:177] * Verifying Kubernetes components...
	I0216 17:33:57.528312  409577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:33:57.544859  409577 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:33:57.543665  409577 kapi.go:59] client config for kubernetes-upgrade-001550: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.crt", KeyFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubernetes-upgrade-001550/client.key", CAFile:"/home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c29b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 17:33:57.546684  409577 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:33:57.546706  409577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 17:33:57.546783  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:57.546882  409577 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-001550"
	W0216 17:33:57.546905  409577 addons.go:243] addon default-storageclass should already be in state true
	I0216 17:33:57.546935  409577 host.go:66] Checking if "kubernetes-upgrade-001550" exists ...
	I0216 17:33:57.547407  409577 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-001550 --format={{.State.Status}}
	I0216 17:33:57.565025  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:57.565895  409577 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 17:33:57.565977  409577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 17:33:57.566075  409577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-001550
	I0216 17:33:57.588756  409577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/kubernetes-upgrade-001550/id_rsa Username:docker}
	I0216 17:33:57.603861  409577 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:33:57.603891  409577 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0216 17:33:57.603949  409577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:33:57.615129  409577 api_server.go:72] duration metric: took 90.380148ms to wait for apiserver process to appear ...
	I0216 17:33:57.615157  409577 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:33:57.615178  409577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0216 17:33:57.619417  409577 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0216 17:33:57.620405  409577 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 17:33:57.620439  409577 api_server.go:131] duration metric: took 5.275142ms to wait for apiserver health ...
	I0216 17:33:57.620455  409577 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:33:57.625138  409577 system_pods.go:59] 5 kube-system pods found
	I0216 17:33:57.625169  409577 system_pods.go:61] "etcd-kubernetes-upgrade-001550" [fa121eff-e36b-4e8f-9ef3-f8db70e6a48b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 17:33:57.625180  409577 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-001550" [28158c29-207b-4284-bb84-0d9c417c215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 17:33:57.625197  409577 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-001550" [fab839ae-ebd1-4902-a07f-c519b3967bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 17:33:57.625217  409577 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-001550" [0749dc19-7afc-4baa-b25a-9be2a9b294ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 17:33:57.625226  409577 system_pods.go:61] "storage-provisioner" [649f4ce0-8671-446a-9bd5-5d0782f0fdcb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 17:33:57.625235  409577 system_pods.go:74] duration metric: took 4.771689ms to wait for pod list to return data ...
	I0216 17:33:57.625246  409577 kubeadm.go:581] duration metric: took 100.503199ms to wait for : map[apiserver:true system_pods:true] ...
	I0216 17:33:57.625265  409577 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:33:57.627965  409577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0216 17:33:57.627989  409577 node_conditions.go:123] node cpu capacity is 8
	I0216 17:33:57.628001  409577 node_conditions.go:105] duration metric: took 2.719505ms to run NodePressure ...
	I0216 17:33:57.628016  409577 start.go:228] waiting for startup goroutines ...
	I0216 17:33:57.678417  409577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:33:57.699452  409577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 17:33:58.314011  409577 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0216 17:33:58.315400  409577 addons.go:505] enable addons completed in 795.408906ms: enabled=[storage-provisioner default-storageclass]
	I0216 17:33:58.315443  409577 start.go:233] waiting for cluster config update ...
	I0216 17:33:58.315455  409577 start.go:242] writing updated cluster config ...
	I0216 17:33:58.315721  409577 ssh_runner.go:195] Run: rm -f paused
	I0216 17:33:58.366064  409577 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0216 17:33:58.368466  409577 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-001550" cluster and "default" namespace by default
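	
	For reference, the two waits recorded above can be replayed by hand against this profile. A minimal sketch, assuming the apiserver address and profile name taken from the log:
	
	  # same endpoint the harness polls for health; -k skips TLS verification for brevity
	  curl -k https://192.168.67.2:8443/healthz
	  # the kube-system pod list the readiness wait inspects
	  kubectl --context kubernetes-upgrade-001550 get pods -n kube-system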
	
	
	==> Docker <==
	Feb 16 17:33:43 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:43Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 16 17:33:43 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 16 17:33:43 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:43Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 16 17:33:43 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:43Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 16 17:33:43 kubernetes-upgrade-001550 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 16 17:33:45 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/85c4ac0846b2cac15dc1acdfdc5a12031c05e2f7cd7d4a06bf1759a9360ac4f0/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:33:45 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2fa2a6d9cae838cea10c4ea9e52582e23b0333bce188fe1726eabf2fdee44687/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:33:45 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e998eb5ca20b2e74a4080be35c51cd24f153f82dd9db32d0416ad8e0d4fce07/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:33:45 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2ddf8aa8e5fd9ef9ec3a619dbaac3b9972b1805db509ff2eb97ca56ffbf1138/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.407385675Z" level=info msg="ignoring event" container=da48efb7b10f7e9e7a5c8806c89b70cc2a40c64c44a5e823b412eff65523a352 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.408506271Z" level=info msg="ignoring event" container=9e998eb5ca20b2e74a4080be35c51cd24f153f82dd9db32d0416ad8e0d4fce07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.409984183Z" level=info msg="ignoring event" container=85c4ac0846b2cac15dc1acdfdc5a12031c05e2f7cd7d4a06bf1759a9360ac4f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.410573865Z" level=info msg="ignoring event" container=2fa2a6d9cae838cea10c4ea9e52582e23b0333bce188fe1726eabf2fdee44687 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.411111528Z" level=info msg="ignoring event" container=e2ddf8aa8e5fd9ef9ec3a619dbaac3b9972b1805db509ff2eb97ca56ffbf1138 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.417826935Z" level=info msg="ignoring event" container=4cde51b9e617d5c091ee620b9beb261d7728a2210ea736e40c02217b0a5220ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:50 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:50.500043980Z" level=info msg="ignoring event" container=54b63cfc903957a015750a4d90a674ce83b3ddada59762868ceb32de0c9651b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:51 kubernetes-upgrade-001550 dockerd[12983]: time="2024-02-16T17:33:51.304408102Z" level=info msg="ignoring event" container=fd2f3b124dcfaa1c2e4958e97c131a1a772a743805d75ef658a0db60d859daed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f3c96e2918b279eab405c63411220043e1d8a873cdedc6d4de00712bcd8ff893/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: W0216 17:33:51.515587   13203 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e7aa8aaac4779d0cc3cc89d1f80c89498036dc94b0ff61ac6a68ab789380a657/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: W0216 17:33:51.532416   13203 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8565cc9b573b1f3efd7f20d9eac311be98bfd48a37d2b53d68fe59a105db6f18/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: W0216 17:33:51.598825   13203 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: time="2024-02-16T17:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f39bde18b6550616ceff95c09a311c9111e499ef94a965838b2c6135c94b4761/resolv.conf as [nameserver 192.168.67.1 search europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Feb 16 17:33:51 kubernetes-upgrade-001550 cri-dockerd[13203]: W0216 17:33:51.610792   13203 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4c38247ef278       4270645ed6b7a       6 seconds ago       Running             kube-scheduler            2                   8565cc9b573b1       kube-scheduler-kubernetes-upgrade-001550
	8cee70322d750       d4e01cdf63970       6 seconds ago       Running             kube-controller-manager   2                   e7aa8aaac4779       kube-controller-manager-kubernetes-upgrade-001550
	2e4c516728ed9       bbb47a0f83324       6 seconds ago       Running             kube-apiserver            2                   f39bde18b6550       kube-apiserver-kubernetes-upgrade-001550
	dc4f3176ad1de       a0eed15eed449       6 seconds ago       Running             etcd                      2                   f3c96e2918b27       etcd-kubernetes-upgrade-001550
	da48efb7b10f7       d4e01cdf63970       14 seconds ago      Exited              kube-controller-manager   1                   e2ddf8aa8e5fd       kube-controller-manager-kubernetes-upgrade-001550
	54b63cfc90395       a0eed15eed449       14 seconds ago      Exited              etcd                      1                   9e998eb5ca20b       etcd-kubernetes-upgrade-001550
	4cde51b9e617d       4270645ed6b7a       14 seconds ago      Exited              kube-scheduler            1                   2fa2a6d9cae83       kube-scheduler-kubernetes-upgrade-001550
	fd2f3b124dcfa       bbb47a0f83324       14 seconds ago      Exited              kube-apiserver            1                   85c4ac0846b2c       kube-apiserver-kubernetes-upgrade-001550
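	
	The listing above shows two generations of every control-plane container: attempt 1 exited during the restart at 17:33:50, and attempt 2 is the replacement started at 17:33:53. An equivalent view can be pulled from a live node; a sketch, assuming the profile still exists:
	
	  # list all CRI containers inside the minikube node, including exited ones
	  minikube ssh -p kubernetes-upgrade-001550 -- sudo crictl ps -a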
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-001550
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-001550
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9
	                    minikube.k8s.io/name=kubernetes-upgrade-001550
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_16T17_33_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Feb 2024 17:33:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-001550
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Feb 2024 17:33:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Feb 2024 17:33:55 +0000   Fri, 16 Feb 2024 17:33:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Feb 2024 17:33:55 +0000   Fri, 16 Feb 2024 17:33:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Feb 2024 17:33:55 +0000   Fri, 16 Feb 2024 17:33:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Feb 2024 17:33:55 +0000   Fri, 16 Feb 2024 17:33:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-001550
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859376Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859376Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cc91e561e4a4e10a36e2f8138f1b007
	  System UUID:                acb36029-ab58-463c-9744-d5f201258c30
	  Boot ID:                    dc22470d-5531-4189-8394-1041aab066df
	  Kernel Version:             5.15.0-1051-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-001550                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kube-apiserver-kubernetes-upgrade-001550             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-001550    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-kubernetes-upgrade-001550             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 37s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     31s                kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  31s                kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 31s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s                kubelet  Node kubernetes-upgrade-001550 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6s (x8 over 7s)    kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6s (x8 over 7s)    kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6s (x7 over 7s)    kubelet  Node kubernetes-upgrade-001550 status is now: NodeHasSufficientPID
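	
	The three "Starting kubelet." events (37s, 31s and 7s before capture) line up with the initial boot, the post-kubeadm restart and the final restart visible in the Docker log above. This node view can be regenerated at any time while the profile is up; a sketch using the context from this run:
	
	  kubectl --context kubernetes-upgrade-001550 describe node kubernetes-upgrade-001550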
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 d4 2f 48 36 8b 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 71 86 78 fb 93 08 06
	[  +5.109567] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 20 ea ea 22 c7 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 42 98 c2 e9 5d 08 06
	[Feb16 17:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 cd 6f f8 69 9c 08 06
	[  +0.000442] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 99 1a 54 c9 bf 08 06
	[ +20.744126] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cbr0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 3d 45 82 2d 52 08 06
	[  +0.401169] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 3d 45 82 2d 52 08 06
	[ +13.099274] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 77 26 e2 05 f8 08 06
	[  +0.000387] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 3d 45 82 2d 52 08 06
	[Feb16 17:29] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e a8 fe f3 03 85 08 06
	[Feb16 17:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba d4 5b d6 50 19 08 06
	[Feb16 17:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 c0 9b 14 00 15 08 06
	
	
	==> etcd [54b63cfc9039] <==
	{"level":"info","ts":"2024-02-16T17:33:47.896213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-16T17:33:47.896292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-16T17:33:47.896308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-16T17:33:47.89632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:47.896325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:47.896333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:47.89634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:47.897968Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:33:47.897998Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:33:47.897969Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-001550 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:33:47.89823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:33:47.898259Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T17:33:47.900063Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T17:33:47.900227Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-16T17:33:50.355478Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-16T17:33:50.35557Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-001550","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-02-16T17:33:50.355665Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:33:50.355762Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/02/16 17:33:50 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-02-16T17:33:50.401525Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:33:50.401629Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-16T17:33:50.401739Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-02-16T17:33:50.405544Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:33:50.405752Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:33:50.405809Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-001550","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> etcd [dc4f3176ad1d] <==
	{"level":"info","ts":"2024-02-16T17:33:53.502479Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T17:33:53.502495Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T17:33:53.50262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-02-16T17:33:53.502706Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-02-16T17:33:53.50281Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:33:53.50285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:33:53.507592Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-16T17:33:53.507866Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-16T17:33:53.507905Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-16T17:33:53.508055Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:33:53.508072Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:33:54.593066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:54.593174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:54.593198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-16T17:33:54.593213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-02-16T17:33:54.593221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-16T17:33:54.593232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-02-16T17:33:54.593242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-16T17:33:54.594439Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-001550 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:33:54.594492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:33:54.594512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:33:54.594726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:33:54.594801Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T17:33:54.596989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T17:33:54.597488Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 17:33:59 up  1:16,  0 users,  load average: 1.42, 2.42, 2.30
	Linux kubernetes-upgrade-001550 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [2e4c516728ed] <==
	I0216 17:33:55.803165       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0216 17:33:55.803187       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0216 17:33:55.801220       1 controller.go:78] Starting OpenAPI AggregationController
	I0216 17:33:55.803667       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0216 17:33:55.803763       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0216 17:33:55.805687       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0216 17:33:55.892760       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0216 17:33:55.897490       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0216 17:33:55.901657       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0216 17:33:55.903105       1 shared_informer.go:318] Caches are synced for configmaps
	I0216 17:33:55.903386       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0216 17:33:55.903428       1 aggregator.go:165] initial CRD sync complete...
	I0216 17:33:55.903436       1 autoregister_controller.go:141] Starting autoregister controller
	I0216 17:33:55.903477       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0216 17:33:55.903483       1 cache.go:39] Caches are synced for autoregister controller
	I0216 17:33:55.903938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0216 17:33:55.905140       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0216 17:33:55.905160       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0216 17:33:55.905253       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0216 17:33:56.805195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0216 17:33:57.343089       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0216 17:33:57.352821       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0216 17:33:57.378784       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0216 17:33:57.401227       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0216 17:33:57.407822       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [fd2f3b124dcf] <==
	I0216 17:33:50.392449       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0216 17:33:50.392895       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0216 17:33:50.392963       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0216 17:33:50.393169       1 controller.go:159] Shutting down quota evaluator
	I0216 17:33:50.393195       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:33:50.393325       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0216 17:33:50.393345       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0216 17:33:50.393703       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0216 17:33:50.393873       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:33:50.393892       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:33:50.393905       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:33:50.393924       1 controller.go:178] quota evaluator worker shutdown
	W0216 17:33:50.394061       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394121       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394214       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394265       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394323       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394371       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394557       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394735       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394789       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.394963       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:33:50.395006       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0216 17:33:50.395654       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0216 17:33:50.395725       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [8cee70322d75] <==
	I0216 17:33:58.018170       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0216 17:33:58.018314       1 expand_controller.go:328] "Starting expand controller"
	I0216 17:33:58.018331       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0216 17:33:58.020108       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0216 17:33:58.020252       1 controller.go:169] "Starting ephemeral volume controller"
	I0216 17:33:58.020282       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0216 17:33:58.021963       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0216 17:33:58.022055       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0216 17:33:58.022078       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0216 17:33:58.027811       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0216 17:33:58.027834       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0216 17:33:58.027972       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0216 17:33:58.027989       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0216 17:33:58.031008       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0216 17:33:58.031204       1 job_controller.go:224] "Starting job controller"
	I0216 17:33:58.031214       1 shared_informer.go:311] Waiting for caches to sync for job
	I0216 17:33:58.110049       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0216 17:33:58.110152       1 disruption.go:433] "Sending events to api server."
	I0216 17:33:58.110188       1 disruption.go:444] "Starting disruption controller"
	I0216 17:33:58.110199       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0216 17:33:58.159497       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0216 17:33:58.159559       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0216 17:33:58.210106       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0216 17:33:58.210169       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0216 17:33:58.210178       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	
	
	==> kube-controller-manager [da48efb7b10f] <==
	I0216 17:33:46.696227       1 serving.go:380] Generated self-signed cert in-memory
	I0216 17:33:47.449344       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0216 17:33:47.449370       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:33:47.450416       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0216 17:33:47.450415       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0216 17:33:47.450883       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0216 17:33:47.450924       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-scheduler [4cde51b9e617] <==
	I0216 17:33:46.650451       1 serving.go:380] Generated self-signed cert in-memory
	W0216 17:33:48.997664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0216 17:33:48.997711       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0216 17:33:48.997724       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0216 17:33:48.997733       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0216 17:33:49.016992       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0216 17:33:49.017024       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:33:49.018670       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0216 17:33:49.019005       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0216 17:33:49.019026       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:33:49.019055       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0216 17:33:49.119651       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:33:50.361164       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0216 17:33:50.361257       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0216 17:33:50.361420       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0216 17:33:50.361627       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f4c38247ef27] <==
	I0216 17:33:54.250787       1 serving.go:380] Generated self-signed cert in-memory
	W0216 17:33:55.892889       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0216 17:33:55.892947       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0216 17:33:55.892963       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0216 17:33:55.892974       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0216 17:33:55.909769       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0216 17:33:55.909796       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:33:55.911059       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0216 17:33:55.911102       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:33:55.911757       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0216 17:33:55.911800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0216 17:33:56.012261       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128374   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d136c416382b7ba35dab7bbeece88b8-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-001550\" (UID: \"6d136c416382b7ba35dab7bbeece88b8\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128440   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0b1d5479eabeafc603ce08265da8f06-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-001550\" (UID: \"a0b1d5479eabeafc603ce08265da8f06\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128474   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0b1d5479eabeafc603ce08265da8f06-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-001550\" (UID: \"a0b1d5479eabeafc603ce08265da8f06\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128493   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d136c416382b7ba35dab7bbeece88b8-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-001550\" (UID: \"6d136c416382b7ba35dab7bbeece88b8\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128523   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d136c416382b7ba35dab7bbeece88b8-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-001550\" (UID: \"6d136c416382b7ba35dab7bbeece88b8\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128546   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0b1d5479eabeafc603ce08265da8f06-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-001550\" (UID: \"a0b1d5479eabeafc603ce08265da8f06\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128566   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a0b1d5479eabeafc603ce08265da8f06-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-001550\" (UID: \"a0b1d5479eabeafc603ce08265da8f06\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128592   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0b1d5479eabeafc603ce08265da8f06-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-001550\" (UID: \"a0b1d5479eabeafc603ce08265da8f06\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.128624   14389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0b1d5479eabeafc603ce08265da8f06-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-001550\" (UID: \"a0b1d5479eabeafc603ce08265da8f06\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: E0216 17:33:53.228911   14389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-001550?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.318055   14389 scope.go:117] "RemoveContainer" containerID="54b63cfc903957a015750a4d90a674ce83b3ddada59762868ceb32de0c9651b9"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.325041   14389 scope.go:117] "RemoveContainer" containerID="fd2f3b124dcfaa1c2e4958e97c131a1a772a743805d75ef658a0db60d859daed"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.332783   14389 scope.go:117] "RemoveContainer" containerID="da48efb7b10f7e9e7a5c8806c89b70cc2a40c64c44a5e823b412eff65523a352"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.414080   14389 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: E0216 17:33:53.414589   14389 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-001550"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.826950   14389 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa2a6d9cae838cea10c4ea9e52582e23b0333bce188fe1726eabf2fdee44687"
	Feb 16 17:33:53 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:53.835147   14389 scope.go:117] "RemoveContainer" containerID="4cde51b9e617d5c091ee620b9beb261d7728a2210ea736e40c02217b0a5220ca"
	Feb 16 17:33:54 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:54.222268   14389 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-001550"
	Feb 16 17:33:55 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:55.919311   14389 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-001550"
	Feb 16 17:33:55 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:55.919440   14389 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-001550"
	Feb 16 17:33:56 kubernetes-upgrade-001550 kubelet[14389]: E0216 17:33:56.002136   14389 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-001550\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-001550"
	Feb 16 17:33:56 kubernetes-upgrade-001550 kubelet[14389]: E0216 17:33:56.002141   14389 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-001550\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-001550"
	Feb 16 17:33:56 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:56.617187   14389 apiserver.go:52] "Watching apiserver"
	Feb 16 17:33:56 kubernetes-upgrade-001550 kubelet[14389]: I0216 17:33:56.727232   14389 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 16 17:33:56 kubernetes-upgrade-001550 kubelet[14389]: E0216 17:33:56.939351   14389 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-001550\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-001550"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-001550 -n kubernetes-upgrade-001550
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-001550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-001550 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-001550 describe pod storage-provisioner: exit status 1 (66.841242ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-001550 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-001550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-001550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-001550: (2.307165303s)
--- FAIL: TestKubernetesUpgrade (824.04s)
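
Per the 17:33:57 pod list in the log above, storage-provisioner stayed Pending because the node still carried the node.kubernetes.io/not-ready:NoSchedule taint when the check ran. A sketch for confirming that against a live profile (not part of the harness):

	kubectl --context kubernetes-upgrade-001550 get node kubernetes-upgrade-001550 -o jsonpath='{.spec.taints}'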

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (504.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 109 (8m24.198424507s)

-- stdout --
	* [old-k8s-version-478853] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node old-k8s-version-478853 in cluster old-k8s-version-478853
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 17:36:03 old-k8s-version-478853 kubelet[5716]: E0216 17:36:03.828321    5716 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:36:04 old-k8s-version-478853 kubelet[5716]: E0216 17:36:04.833813    5716 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:36:06 old-k8s-version-478853 kubelet[5716]: E0216 17:36:06.833227    5716 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	
	

-- /stdout --
** stderr ** 
	I0216 17:28:01.011410  354075 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:28:01.012556  354075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:28:01.012564  354075 out.go:304] Setting ErrFile to fd 2...
	I0216 17:28:01.012581  354075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:28:01.012935  354075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:28:01.013829  354075 out.go:298] Setting JSON to false
	I0216 17:28:01.016230  354075 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4227,"bootTime":1708100254,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 17:28:01.016336  354075 start.go:139] virtualization: kvm guest
	I0216 17:28:01.018836  354075 out.go:177] * [old-k8s-version-478853] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 17:28:01.020814  354075 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:28:01.020849  354075 notify.go:220] Checking for updates...
	I0216 17:28:01.022511  354075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:28:01.024034  354075 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:28:01.025615  354075 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 17:28:01.027106  354075 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 17:28:01.028490  354075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:28:01.030716  354075 config.go:182] Loaded profile config "bridge-123826": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:28:01.030890  354075 config.go:182] Loaded profile config "kubenet-123826": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:28:01.031023  354075 config.go:182] Loaded profile config "kubernetes-upgrade-001550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:28:01.031144  354075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:28:01.073375  354075 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:28:01.073476  354075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:28:01.163797  354075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:87 SystemTime:2024-02-16 17:28:01.153733197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:28:01.163916  354075 docker.go:295] overlay module found
	I0216 17:28:01.165860  354075 out.go:177] * Using the docker driver based on user configuration
	I0216 17:28:01.167828  354075 start.go:299] selected driver: docker
	I0216 17:28:01.167849  354075 start.go:903] validating driver "docker" against <nil>
	I0216 17:28:01.167865  354075 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:28:01.169071  354075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:28:01.247516  354075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:87 SystemTime:2024-02-16 17:28:01.235339949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:28:01.247725  354075 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 17:28:01.247968  354075 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:28:01.250145  354075 out.go:177] * Using Docker driver with root privileges
	I0216 17:28:01.251626  354075 cni.go:84] Creating CNI manager for ""
	I0216 17:28:01.251658  354075 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:28:01.251671  354075 start_flags.go:323] config:
	{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:28:01.253389  354075 out.go:177] * Starting control plane node old-k8s-version-478853 in cluster old-k8s-version-478853
	I0216 17:28:01.254905  354075 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:28:01.256610  354075 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:28:01.258317  354075 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:28:01.258345  354075 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:28:01.258364  354075 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 17:28:01.258373  354075 cache.go:56] Caching tarball of preloaded images
	I0216 17:28:01.258470  354075 preload.go:174] Found /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 17:28:01.258478  354075 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:28:01.258580  354075 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:28:01.258598  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json: {Name:mk1ad9eb4deeb05d969d775f06d57fdf99173ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:01.277181  354075 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:28:01.277211  354075 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:28:01.277230  354075 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:28:01.277259  354075 start.go:365] acquiring machines lock for old-k8s-version-478853: {Name:mkde5e52743909de9e75497b3ed0dd80f14fc0ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:28:01.277360  354075 start.go:369] acquired machines lock for "old-k8s-version-478853" in 82.29µs
	I0216 17:28:01.277389  354075 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:28:01.277463  354075 start.go:125] createHost starting for "" (driver="docker")
	I0216 17:28:01.279701  354075 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0216 17:28:01.279952  354075 start.go:159] libmachine.API.Create for "old-k8s-version-478853" (driver="docker")
	I0216 17:28:01.279981  354075 client.go:168] LocalClient.Create starting
	I0216 17:28:01.280050  354075 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem
	I0216 17:28:01.280080  354075 main.go:141] libmachine: Decoding PEM data...
	I0216 17:28:01.280101  354075 main.go:141] libmachine: Parsing certificate...
	I0216 17:28:01.280167  354075 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem
	I0216 17:28:01.280194  354075 main.go:141] libmachine: Decoding PEM data...
	I0216 17:28:01.280211  354075 main.go:141] libmachine: Parsing certificate...
	I0216 17:28:01.280541  354075 cli_runner.go:164] Run: docker network inspect old-k8s-version-478853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 17:28:01.301192  354075 cli_runner.go:211] docker network inspect old-k8s-version-478853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 17:28:01.301276  354075 network_create.go:281] running [docker network inspect old-k8s-version-478853] to gather additional debugging logs...
	I0216 17:28:01.301303  354075 cli_runner.go:164] Run: docker network inspect old-k8s-version-478853
	W0216 17:28:01.322261  354075 cli_runner.go:211] docker network inspect old-k8s-version-478853 returned with exit code 1
	I0216 17:28:01.322295  354075 network_create.go:284] error running [docker network inspect old-k8s-version-478853]: docker network inspect old-k8s-version-478853: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-478853 not found
	I0216 17:28:01.322319  354075 network_create.go:286] output of [docker network inspect old-k8s-version-478853]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-478853 not found
	
	** /stderr **
	I0216 17:28:01.322429  354075 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:28:01.342630  354075 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c4eff5c28743 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:04:d1:63:57} reservation:<nil>}
	I0216 17:28:01.343667  354075 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-da77939dda2e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:54:6a:64:c8} reservation:<nil>}
	I0216 17:28:01.344712  354075 network.go:212] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05405816cf1b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:5b:78:51:34} reservation:<nil>}
	I0216 17:28:01.345910  354075 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002933bb0}
	I0216 17:28:01.345989  354075 network_create.go:124] attempt to create docker network old-k8s-version-478853 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0216 17:28:01.346061  354075 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-478853 old-k8s-version-478853
	I0216 17:28:01.426365  354075 network_create.go:108] docker network old-k8s-version-478853 192.168.76.0/24 created
	I0216 17:28:01.426400  354075 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-478853" container
	I0216 17:28:01.426478  354075 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 17:28:01.446138  354075 cli_runner.go:164] Run: docker volume create old-k8s-version-478853 --label name.minikube.sigs.k8s.io=old-k8s-version-478853 --label created_by.minikube.sigs.k8s.io=true
	I0216 17:28:01.468025  354075 oci.go:103] Successfully created a docker volume old-k8s-version-478853
	I0216 17:28:01.468111  354075 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-478853-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-478853 --entrypoint /usr/bin/test -v old-k8s-version-478853:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 17:28:02.032243  354075 oci.go:107] Successfully prepared a docker volume old-k8s-version-478853
	I0216 17:28:02.032288  354075 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:28:02.032311  354075 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 17:28:02.032382  354075 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-478853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 17:28:05.263760  354075 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-478853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (3.231315763s)
	I0216 17:28:05.263797  354075 kic.go:203] duration metric: took 3.231482 seconds to extract preloaded images to volume
	W0216 17:28:05.263938  354075 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 17:28:05.264056  354075 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 17:28:05.326810  354075 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-478853 --name old-k8s-version-478853 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-478853 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-478853 --network old-k8s-version-478853 --ip 192.168.76.2 --volume old-k8s-version-478853:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 17:28:05.706521  354075 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Running}}
	I0216 17:28:05.726476  354075 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:28:05.747559  354075 cli_runner.go:164] Run: docker exec old-k8s-version-478853 stat /var/lib/dpkg/alternatives/iptables
	I0216 17:28:05.792639  354075 oci.go:144] the created container "old-k8s-version-478853" has a running status.
	I0216 17:28:05.792677  354075 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa...
	I0216 17:28:05.975348  354075 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 17:28:06.004626  354075 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:28:06.025303  354075 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 17:28:06.025340  354075 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-478853 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 17:28:06.078615  354075 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:28:06.102241  354075 machine.go:88] provisioning docker machine ...
	I0216 17:28:06.102284  354075 ubuntu.go:169] provisioning hostname "old-k8s-version-478853"
	I0216 17:28:06.102350  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:06.130820  354075 main.go:141] libmachine: Using SSH client type: native
	I0216 17:28:06.131376  354075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0216 17:28:06.131403  354075 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-478853 && echo "old-k8s-version-478853" | sudo tee /etc/hostname
	I0216 17:28:06.132171  354075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36092->127.0.0.1:33052: read: connection reset by peer
	I0216 17:28:09.285249  354075 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-478853
	
	I0216 17:28:09.285407  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:09.306312  354075 main.go:141] libmachine: Using SSH client type: native
	I0216 17:28:09.306715  354075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0216 17:28:09.306743  354075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-478853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-478853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-478853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:28:09.444951  354075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:28:09.444991  354075 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 17:28:09.445031  354075 ubuntu.go:177] setting up certificates
	I0216 17:28:09.445053  354075 provision.go:83] configureAuth start
	I0216 17:28:09.445130  354075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:28:09.463005  354075 provision.go:138] copyHostCerts
	I0216 17:28:09.463076  354075 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 17:28:09.463092  354075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 17:28:09.463185  354075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 17:28:09.463334  354075 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 17:28:09.463351  354075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 17:28:09.463394  354075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 17:28:09.463479  354075 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 17:28:09.463492  354075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 17:28:09.463527  354075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 17:28:09.463604  354075 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-478853 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-478853]
	I0216 17:28:09.615979  354075 provision.go:172] copyRemoteCerts
	I0216 17:28:09.616038  354075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:28:09.616071  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:09.633687  354075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:28:09.730017  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 17:28:09.753281  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 17:28:09.776282  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0216 17:28:09.798788  354075 provision.go:86] duration metric: configureAuth took 353.722077ms
	I0216 17:28:09.798821  354075 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:28:09.799019  354075 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:28:09.799087  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:09.817140  354075 main.go:141] libmachine: Using SSH client type: native
	I0216 17:28:09.817673  354075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0216 17:28:09.817706  354075 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:28:09.948612  354075 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:28:09.948640  354075 ubuntu.go:71] root file system type: overlay
	I0216 17:28:09.948807  354075 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:28:09.948880  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:09.967168  354075 main.go:141] libmachine: Using SSH client type: native
	I0216 17:28:09.967526  354075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0216 17:28:09.967592  354075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:28:10.111358  354075 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:28:10.111444  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:10.128916  354075 main.go:141] libmachine: Using SSH client type: native
	I0216 17:28:10.129289  354075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0216 17:28:10.129315  354075 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:28:10.834438  354075 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:28:10.106751193 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 17:28:10.834474  354075 machine.go:91] provisioned docker machine in 4.73220419s
	I0216 17:28:10.834486  354075 client.go:171] LocalClient.Create took 9.554499734s
	I0216 17:28:10.834507  354075 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-478853" took 9.554554941s
	I0216 17:28:10.834517  354075 start.go:300] post-start starting for "old-k8s-version-478853" (driver="docker")
	I0216 17:28:10.834530  354075 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:28:10.834592  354075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:28:10.834635  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:10.851790  354075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:28:10.945524  354075 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:28:10.948721  354075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:28:10.948751  354075 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:28:10.948759  354075 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:28:10.948766  354075 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:28:10.948775  354075 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 17:28:10.948827  354075 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 17:28:10.948898  354075 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 17:28:10.948980  354075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:28:10.957100  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:28:10.978674  354075 start.go:303] post-start completed in 144.144874ms
	I0216 17:28:10.979011  354075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:28:10.996978  354075 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:28:10.997256  354075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:28:10.997301  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:11.015848  354075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:28:11.105027  354075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:28:11.109394  354075 start.go:128] duration metric: createHost completed in 9.831914815s
	I0216 17:28:11.109423  354075 start.go:83] releasing machines lock for "old-k8s-version-478853", held for 9.832051827s
	I0216 17:28:11.109490  354075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:28:11.126920  354075 ssh_runner.go:195] Run: cat /version.json
	I0216 17:28:11.126967  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:11.127026  354075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:28:11.127099  354075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:28:11.145367  354075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:28:11.145408  354075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:28:11.235799  354075 ssh_runner.go:195] Run: systemctl --version
	I0216 17:28:11.326547  354075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 17:28:11.331384  354075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 17:28:11.355367  354075 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 17:28:11.355451  354075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:28:11.371591  354075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:28:11.387881  354075 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0216 17:28:11.387911  354075 start.go:475] detecting cgroup driver to use...
	I0216 17:28:11.387944  354075 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:28:11.388060  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:28:11.404348  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:28:11.413528  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:28:11.422614  354075 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:28:11.422679  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:28:11.431472  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:28:11.440310  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:28:11.448955  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:28:11.457681  354075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:28:11.465746  354075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:28:11.474698  354075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:28:11.481913  354075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:28:11.489035  354075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:28:11.575688  354075 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 17:28:11.673847  354075 start.go:475] detecting cgroup driver to use...
	I0216 17:28:11.673900  354075 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:28:11.673953  354075 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:28:11.686614  354075 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:28:11.686716  354075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:28:11.699498  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:28:11.717251  354075 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:28:11.720809  354075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:28:11.729549  354075 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:28:11.746817  354075 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:28:11.843006  354075 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:28:11.938476  354075 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:28:11.938623  354075 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:28:11.956099  354075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:28:12.036315  354075 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:28:12.311329  354075 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:28:12.337058  354075 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:28:12.362639  354075 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:28:12.362741  354075 cli_runner.go:164] Run: docker network inspect old-k8s-version-478853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:28:12.381052  354075 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 17:28:12.385275  354075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:28:12.395833  354075 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:28:12.395901  354075 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:28:12.413961  354075 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:28:12.413981  354075 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:28:12.414020  354075 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:28:12.422616  354075 ssh_runner.go:195] Run: which lz4
	I0216 17:28:12.426100  354075 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 17:28:12.429269  354075 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:28:12.429297  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 17:28:13.459945  354075 docker.go:649] Took 1.033867 seconds to copy over tarball
	I0216 17:28:13.460006  354075 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:28:16.062052  354075 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.602007284s)
	I0216 17:28:16.062126  354075 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 17:28:16.124290  354075 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:28:16.133354  354075 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:28:16.150485  354075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:28:16.243378  354075 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:28:18.190622  354075 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.947201235s)
	I0216 17:28:18.190796  354075 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:28:18.213587  354075 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:28:18.213610  354075 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:28:18.213619  354075 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:28:18.215273  354075 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:28:18.215481  354075 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:28:18.215537  354075 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:28:18.215678  354075 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:28:18.215684  354075 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:28:18.215741  354075 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:28:18.215827  354075 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:28:18.215851  354075 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:28:18.216033  354075 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:28:18.216215  354075 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:28:18.216239  354075 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:28:18.216216  354075 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:28:18.216392  354075 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:28:18.216439  354075 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:28:18.216502  354075 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:28:18.216558  354075 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
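
Each image.go:177 lookup above fails because the local Docker daemon on the build host holds none of the registry.k8s.io-tagged images, so minikube falls back to its on-disk cache for every one of them. The per-image check is roughly this (a sketch of the behavior, not minikube's exact code path):

    # Check the daemon first; a miss means falling back to the file cache:
    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.16.0 \
      || echo "not in daemon; will load from .minikube/cache/images"
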
	I0216 17:28:18.389616  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 17:28:18.408313  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:28:18.411378  354075 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 17:28:18.411429  354075 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:28:18.411470  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:28:18.417719  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 17:28:18.430497  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:28:18.431364  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:28:18.432824  354075 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 17:28:18.432904  354075 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:28:18.432983  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:28:18.434185  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 17:28:18.443205  354075 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 17:28:18.443260  354075 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:28:18.443311  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:28:18.455129  354075 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 17:28:18.455194  354075 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:28:18.455247  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:28:18.456447  354075 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 17:28:18.456529  354075 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:28:18.456583  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:28:18.456685  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:28:18.501630  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:28:18.511511  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 17:28:18.511594  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:28:18.563741  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:28:18.583408  354075 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 17:28:18.583464  354075 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:28:18.583513  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:28:18.604187  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:28:18.620479  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:28:18.639878  354075 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 17:28:18.639932  354075 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:28:18.639979  354075 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:28:18.659922  354075 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:28:19.006502  354075 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:28:19.027017  354075 cache_images.go:92] LoadImages completed in 813.381295ms
	W0216 17:28:19.027164  354075 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
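
The warning marks the core problem in this run: the preload tarball ships images tagged k8s.gcr.io/*, while this minikube build checks for registry.k8s.io/* names, so every image fails the existence check ("needs transfer"), a removal is attempted, and the fallback load then fails because .minikube/cache/images was never populated on this host. A hypothetical manual workaround (not something this log attempts) would be to alias the preloaded images under the expected names before the check runs:

    # Hypothetical: re-tag the preloaded k8s.gcr.io images as registry.k8s.io
    # so the existence check can find them:
    for img in pause:3.1 etcd:3.3.15-0 coredns:1.6.2 kube-apiserver:v1.16.0 \
               kube-controller-manager:v1.16.0 kube-scheduler:v1.16.0 kube-proxy:v1.16.0; do
      docker tag "k8s.gcr.io/${img}" "registry.k8s.io/${img}"
    done
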
	I0216 17:28:19.027333  354075 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:28:19.080531  354075 cni.go:84] Creating CNI manager for ""
	I0216 17:28:19.080560  354075 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:28:19.080579  354075 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:28:19.080602  354075 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-478853 NodeName:old-k8s-version-478853 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:28:19.080735  354075 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-478853"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-478853
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
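The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct and handed to kubeadm as one file. Roughly, the consuming steps that follow in this log are:

    # Promote the staged config and run init with it:
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml
    # (the actual run adds the long --ignore-preflight-errors list shown in the
    # Start: line later in this log)
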
	I0216 17:28:19.080801  354075 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-478853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 17:28:19.080845  354075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:28:19.089360  354075 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:28:19.089416  354075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:28:19.097743  354075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 17:28:19.115094  354075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:28:19.134475  354075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
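
The two "scp memory" lines above install the kubelet systemd unit and its kubeadm drop-in (the [Service] override that clears and replaces ExecStart), and kubeadm.yaml.new is staged alongside. For systemd to pick the unit files up, the usual follow-up would be (kubeadm's kubelet-start phase also activates the service later in this log):

    # Reload unit definitions and (on first start) enable the kubelet:
    sudo systemctl daemon-reload
    sudo systemctl enable --now kubelet
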
	I0216 17:28:19.153896  354075 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:28:19.157246  354075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:28:19.171130  354075 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853 for IP: 192.168.76.2
	I0216 17:28:19.171166  354075 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.171327  354075 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 17:28:19.171383  354075 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 17:28:19.171439  354075 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.key
	I0216 17:28:19.171452  354075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.crt with IP's: []
	I0216 17:28:19.290211  354075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.crt ...
	I0216 17:28:19.290241  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.crt: {Name:mk7f62e86590b8c679337b3895444ae16a224c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.290411  354075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.key ...
	I0216 17:28:19.290428  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.key: {Name:mka667bb270af222937e6ce60939ffdb207e7e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.290525  354075 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25
	I0216 17:28:19.290544  354075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 17:28:19.383987  354075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt.31bdca25 ...
	I0216 17:28:19.384020  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt.31bdca25: {Name:mk53eb1fd312e073250b7fe76ef4b7bd700c74ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.384207  354075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25 ...
	I0216 17:28:19.384235  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25: {Name:mk89de4481d7a32f70f330f3ed2d95c19f39fc52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.384327  354075 certs.go:337] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt
	I0216 17:28:19.384391  354075 certs.go:341] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key
	I0216 17:28:19.384438  354075 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key
	I0216 17:28:19.384452  354075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt with IP's: []
	I0216 17:28:19.491703  354075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt ...
	I0216 17:28:19.491734  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt: {Name:mk28d5d76846b3a8f5a310c7e9c469e3fa2f3a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.491906  354075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key ...
	I0216 17:28:19.491924  354075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key: {Name:mk5e160cbceb7333532b8d009d56bd5243cb7d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:28:19.492108  354075 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 17:28:19.492176  354075 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 17:28:19.492209  354075 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 17:28:19.492247  354075 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 17:28:19.492289  354075 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:28:19.492324  354075 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 17:28:19.492395  354075 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:28:19.493026  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:28:19.518158  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 17:28:19.542084  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:28:19.566805  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 17:28:19.594133  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:28:19.619793  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 17:28:19.652833  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:28:19.682739  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 17:28:19.707405  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 17:28:19.730835  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 17:28:19.755808  354075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:28:19.782159  354075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:28:19.799828  354075 ssh_runner.go:195] Run: openssl version
	I0216 17:28:19.805439  354075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 17:28:19.815547  354075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 17:28:19.819145  354075 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 17:28:19.819192  354075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 17:28:19.825987  354075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
	I0216 17:28:19.835186  354075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 17:28:19.844924  354075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 17:28:19.848229  354075 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 17:28:19.848293  354075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 17:28:19.854600  354075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 17:28:19.863466  354075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:28:19.872672  354075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:28:19.875826  354075 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:28:19.875879  354075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:28:19.882063  354075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
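
The ln -fs targets above follow OpenSSL's CA-directory convention: certificates in /etc/ssl/certs are located via a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. That is why each cert install pairs a hash computation with a symlink; as a sketch for the minikube CA:

    # Compute the subject hash and create the lookup symlink OpenSSL expects:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
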
	I0216 17:28:19.890858  354075 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:28:19.894261  354075 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 17:28:19.894334  354075 kubeadm.go:404] StartCluster: {Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:28:19.894524  354075 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:28:19.911810  354075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:28:19.920543  354075 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:28:19.929606  354075 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:28:19.929699  354075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:28:19.938450  354075 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:28:19.938491  354075 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:28:19.993075  354075 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:28:19.993338  354075 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:28:20.190335  354075 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:28:20.190419  354075 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:28:20.190484  354075 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:28:20.190532  354075 kubeadm.go:322] OS: Linux
	I0216 17:28:20.190593  354075 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:28:20.190661  354075 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:28:20.190732  354075 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:28:20.190810  354075 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:28:20.190878  354075 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:28:20.190945  354075 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:28:20.287337  354075 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:28:20.287474  354075 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:28:20.287638  354075 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:28:20.504570  354075 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:28:20.505180  354075 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:28:20.514204  354075 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:28:20.606927  354075 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:28:20.609215  354075 out.go:204]   - Generating certificates and keys ...
	I0216 17:28:20.609322  354075 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:28:20.610009  354075 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:28:20.887871  354075 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 17:28:20.989434  354075 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 17:28:21.264754  354075 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 17:28:21.534612  354075 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 17:28:21.784325  354075 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 17:28:21.784496  354075 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-478853 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0216 17:28:21.936895  354075 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 17:28:21.937056  354075 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-478853 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0216 17:28:22.092848  354075 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 17:28:22.262825  354075 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 17:28:22.384802  354075 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 17:28:22.385001  354075 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:28:22.450768  354075 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:28:22.641155  354075 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:28:22.717862  354075 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:28:22.828946  354075 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:28:22.829950  354075 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:28:22.831882  354075 out.go:204]   - Booting up control plane ...
	I0216 17:28:22.832014  354075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:28:22.836593  354075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:28:22.837646  354075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:28:22.839641  354075 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:28:22.841892  354075 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:29:02.842106  354075 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:32:22.843255  354075 kubeadm.go:322] 
	I0216 17:32:22.843339  354075 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:32:22.843400  354075 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:32:22.843427  354075 kubeadm.go:322] 
	I0216 17:32:22.843480  354075 kubeadm.go:322] This error is likely caused by:
	I0216 17:32:22.843514  354075 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:32:22.843649  354075 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:32:22.843674  354075 kubeadm.go:322] 
	I0216 17:32:22.843798  354075 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:32:22.843842  354075 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:32:22.843873  354075 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:32:22.843879  354075 kubeadm.go:322] 
	I0216 17:32:22.843966  354075 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:32:22.844089  354075 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:32:22.844249  354075 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:32:22.844311  354075 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:32:22.844394  354075 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:32:22.844425  354075 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:32:22.846928  354075 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:32:22.847077  354075 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:32:22.847279  354075 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:32:22.847435  354075 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:32:22.847514  354075 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:32:22.847570  354075 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0216 17:32:22.847728  354075 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-478853 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-478853 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
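Before the retry below, the triage steps the kubeadm message recommends amount to the following on the node:

    # Inspect the kubelet and any crashed control-plane containers:
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    docker ps -a | grep kube | grep -v pause   # locate the failing container
    docker logs CONTAINERID                    # substitute the ID found above
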
	I0216 17:32:22.847798  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:32:23.619026  354075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:32:23.632000  354075 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:32:23.632066  354075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:32:23.640732  354075 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:32:23.640799  354075 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:32:23.694213  354075 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:32:23.694278  354075 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:32:23.876136  354075 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:32:23.876293  354075 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:32:23.876375  354075 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:32:23.876422  354075 kubeadm.go:322] OS: Linux
	I0216 17:32:23.876497  354075 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:32:23.876563  354075 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:32:23.876647  354075 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:32:23.876696  354075 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:32:23.876773  354075 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:32:23.876844  354075 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:32:23.953503  354075 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:32:23.953635  354075 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:32:23.953796  354075 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:32:24.136932  354075 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:32:24.139148  354075 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:32:24.147005  354075 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:32:24.225711  354075 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:32:24.227948  354075 out.go:204]   - Generating certificates and keys ...
	I0216 17:32:24.228117  354075 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:32:24.228230  354075 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:32:24.228332  354075 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:32:24.228436  354075 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:32:24.228547  354075 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:32:24.228616  354075 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:32:24.228699  354075 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:32:24.228782  354075 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:32:24.228908  354075 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:32:24.229013  354075 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:32:24.229066  354075 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:32:24.229137  354075 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:32:24.350573  354075 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:32:24.539801  354075 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:32:24.596237  354075 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:32:24.732946  354075 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:32:24.733959  354075 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:32:24.735941  354075 out.go:204]   - Booting up control plane ...
	I0216 17:32:24.736069  354075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:32:24.742690  354075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:32:24.744111  354075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:32:24.745089  354075 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:32:24.747884  354075 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:33:04.748145  354075 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:36:24.749312  354075 kubeadm.go:322] 
	I0216 17:36:24.749414  354075 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:36:24.749466  354075 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:36:24.749472  354075 kubeadm.go:322] 
	I0216 17:36:24.749508  354075 kubeadm.go:322] This error is likely caused by:
	I0216 17:36:24.749540  354075 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:36:24.749654  354075 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:36:24.749662  354075 kubeadm.go:322] 
	I0216 17:36:24.749813  354075 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:36:24.749883  354075 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:36:24.749920  354075 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:36:24.749930  354075 kubeadm.go:322] 
	I0216 17:36:24.750073  354075 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:36:24.750199  354075 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:36:24.750291  354075 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:36:24.750374  354075 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:36:24.750475  354075 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:36:24.750512  354075 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:36:24.753234  354075 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:36:24.753398  354075 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:36:24.753670  354075 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:36:24.753780  354075 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:36:24.753848  354075 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:36:24.753910  354075 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:36:24.754138  354075 kubeadm.go:406] StartCluster complete in 8m4.859809472s
	I0216 17:36:24.754281  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:36:24.775940  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.775979  354075 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:36:24.776060  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:36:24.796468  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.796493  354075 logs.go:278] No container was found matching "etcd"
	I0216 17:36:24.796558  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:36:24.815993  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.816014  354075 logs.go:278] No container was found matching "coredns"
	I0216 17:36:24.816071  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:36:24.835724  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.835753  354075 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:36:24.835827  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:36:24.855999  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.856023  354075 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:36:24.856078  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:36:24.877784  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.877812  354075 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:36:24.877865  354075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:36:24.895310  354075 logs.go:276] 0 containers: []
	W0216 17:36:24.895330  354075 logs.go:278] No container was found matching "kindnet"
	I0216 17:36:24.895340  354075 logs.go:123] Gathering logs for kubelet ...
	I0216 17:36:24.895354  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:36:24.921097  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:03 old-k8s-version-478853 kubelet[5716]: E0216 17:36:03.828321    5716 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:36:24.923305  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:04 old-k8s-version-478853 kubelet[5716]: E0216 17:36:04.833813    5716 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:36:24.927005  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:06 old-k8s-version-478853 kubelet[5716]: E0216 17:36:06.833227    5716 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:36:24.929415  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:07 old-k8s-version-478853 kubelet[5716]: E0216 17:36:07.822892    5716 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:36:24.940745  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:14 old-k8s-version-478853 kubelet[5716]: E0216 17:36:14.821265    5716 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:36:24.946152  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:17 old-k8s-version-478853 kubelet[5716]: E0216 17:36:17.828884    5716 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:36:24.948239  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:18 old-k8s-version-478853 kubelet[5716]: E0216 17:36:18.845816    5716 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:36:24.948726  354075 logs.go:138] Found kubelet problem: Feb 16 17:36:18 old-k8s-version-478853 kubelet[5716]: E0216 17:36:18.848783    5716 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
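Each of the kubelet problems above is an ImageInspectError: the inspect call for a control-plane image returned no Id or size. A minimal spot check of what the Docker daemon on the node actually reports for one of these images (tag taken from the log above; the `minikube ssh` invocation is an illustrative sketch, not part of the test) would be:

	# hypothetical check, run from the host against the affected node
	minikube ssh -p old-k8s-version-478853 "docker image inspect k8s.gcr.io/etcd:3.3.15-0 --format '{{.Id}} {{.Size}}'"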
	I0216 17:36:24.957722  354075 logs.go:123] Gathering logs for dmesg ...
	I0216 17:36:24.957748  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:36:24.983071  354075 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:36:24.983168  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:36:25.062664  354075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:36:25.062689  354075 logs.go:123] Gathering logs for Docker ...
	I0216 17:36:25.062699  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:36:25.082257  354075 logs.go:123] Gathering logs for container status ...
	I0216 17:36:25.082296  354075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0216 17:36:25.120328  354075 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:36:25.120385  354075 out.go:239] * 
	W0216 17:36:25.120450  354075 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:36:25.120485  354075 out.go:239] * 
	W0216 17:36:25.121556  354075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:36:25.123694  354075 out.go:177] X Problems detected in kubelet:
	I0216 17:36:25.125393  354075 out.go:177]   Feb 16 17:36:03 old-k8s-version-478853 kubelet[5716]: E0216 17:36:03.828321    5716 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:36:25.126998  354075 out.go:177]   Feb 16 17:36:04 old-k8s-version-478853 kubelet[5716]: E0216 17:36:04.833813    5716 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:36:25.128619  354075 out.go:177]   Feb 16 17:36:06 old-k8s-version-478853 kubelet[5716]: E0216 17:36:06.833227    5716 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:36:25.131383  354075 out.go:177] 
	W0216 17:36:25.132773  354075 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:36:25.132828  354075 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:36:25.132849  354075 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 17:36:25.134560  354075 out.go:177] 

** /stderr **
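The kubeadm output repeats the same diagnostic advice several times; collected into a single sequence it amounts to the following (commands taken verbatim from the advice above; CONTAINERID is kubeadm's own placeholder):

	# run on the minikube node, e.g. via `minikube ssh -p old-k8s-version-478853`
	systemctl status kubelet                    # is the kubelet running at all?
	journalctl -xeu kubelet                     # why it stopped or failed to start
	docker ps -a | grep kube | grep -v pause    # did a control-plane container crash?
	docker logs CONTAINERID                     # inspect the failing container's logs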
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 109
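For a local reproduction, applying the log's suggestion to the failing start command would look roughly like this (a sketch, not verified against this run; flags trimmed to the relevant ones, with the failed profile deleted first so the retry starts clean):

	minikube delete -p old-k8s-version-478853
	out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd   # flag per the suggestion above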
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-478853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-478853:

-- stdout --
	[
	    {
	        "Id": "74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde",
	        "Created": "2024-02-16T17:28:05.344964673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 355623,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:28:05.697815646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hostname",
	        "HostsPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hosts",
	        "LogPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde-json.log",
	        "Name": "/old-k8s-version-478853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-478853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-478853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-478853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-478853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-478853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f50c95616da1165e1cfabfc149c9e4fe9ac50a2d326751a0e496eb203737e29",
	            "SandboxKey": "/var/run/docker/netns/2f50c95616da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-478853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74b66ed59b2b",
	                        "old-k8s-version-478853"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "166a9b0cbcbad81945e5ddf7b3ae3a6fed94ef48dba3d7d6ceb648c91593d0fb",
	                    "EndpointID": "ed2f7497eca108063ac7a9de2d606faa1c6b0a335757a0d57db71815983a1865",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-478853",
	                        "74b66ed59b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
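The post-mortem dumps the entire inspect document; when only a field or two matters, a Go-template format string narrows it down (field paths taken from the JSON above; the network key needs `index` because of the hyphens in its name):

	docker inspect -f '{{.State.Status}}' old-k8s-version-478853
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-478853").IPAddress}}' old-k8s-version-478853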
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 6 (328.490823ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 17:36:25.520734  438552 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

** /stderr **
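The status output names its own fix for the stale kubectl context; scoped to this profile it would be (assuming the profile's cluster entry can still be recovered, which the kubeconfig error above suggests may not hold here):

	minikube update-context -p old-k8s-version-478853
	kubectl config current-context    # expected to print old-k8s-version-478853 on success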
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-478853" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (504.60s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-478853 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-478853 create -f testdata/busybox.yaml: exit status 1 (58.322044ms)

** stderr ** 
	error: context "old-k8s-version-478853" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-478853 create -f testdata/busybox.yaml failed: exit status 1
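The create never reaches a cluster: the context was never written to the kubeconfig because the first start failed. A quick way to confirm what the kubeconfig actually contains (an illustrative check, not part of the test) is:

	kubectl config get-contexts                              # list all known contexts
	kubectl config get-contexts old-k8s-version-478853 \
	  || echo "context old-k8s-version-478853 is missing"    # non-zero exit if absent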
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-478853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-478853:

-- stdout --
	[
	    {
	        "Id": "74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde",
	        "Created": "2024-02-16T17:28:05.344964673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 355623,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:28:05.697815646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hostname",
	        "HostsPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hosts",
	        "LogPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde-json.log",
	        "Name": "/old-k8s-version-478853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-478853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-478853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-478853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-478853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-478853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f50c95616da1165e1cfabfc149c9e4fe9ac50a2d326751a0e496eb203737e29",
	            "SandboxKey": "/var/run/docker/netns/2f50c95616da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-478853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74b66ed59b2b",
	                        "old-k8s-version-478853"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "166a9b0cbcbad81945e5ddf7b3ae3a6fed94ef48dba3d7d6ceb648c91593d0fb",
	                    "EndpointID": "ed2f7497eca108063ac7a9de2d606faa1c6b0a335757a0d57db71815983a1865",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-478853",
	                        "74b66ed59b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
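A note on the inspect output above: in HostConfig.PortBindings every "HostPort" is empty, which tells Docker to pick a free ephemeral host port when the container starts; the ports it actually chose (33048-33052) appear under NetworkSettings.Ports. A minimal sketch for reading the live mapping back, using the container name from this log:

	# Print the host port Docker assigned to the container's SSH port (22/tcp)
	docker port old-k8s-version-478853 22
	# Or dump the whole mapping straight from inspect via a Go template
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-478853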
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 6 (362.113794ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:36:25.968134  438672 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-478853" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
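This exit-status-6 pattern recurs throughout the failure: the container reports Running, but the profile has no entry in the kubeconfig, so `status` cannot extract an endpoint IP. When triaging interactively, the warning's own suggestion is the natural first step; a hedged sketch (profile name taken from this log, assuming the profile's machine records are still intact):

	# Regenerate the kubeconfig entry for the profile from minikube's stored state
	minikube update-context -p old-k8s-version-478853
	# Verify the context now exists before pointing kubectl at it
	kubectl config get-contexts old-k8s-version-478853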
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-478853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-478853:

                                                
                                                
-- stdout --
	[ docker inspect output identical to the post-mortem dump of old-k8s-version-478853 above; container state, port mappings, and network settings unchanged ]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 6 (350.882266ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:36:26.339581  438772 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-478853" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-478853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-478853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m53.116311036s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-478853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-478853 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-478853 describe deploy/metrics-server -n kube-system: exit status 1 (46.696462ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-478853" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-478853 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
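The addon enable failed before the image override ever mattered: every kubectl apply callback was refused on 127.0.0.1:8443, meaning the apiserver inside the node was not listening. A quick way to confirm that independently of the test harness, as a sketch (assuming curl is available inside the kicbase image):

	# Probe the apiserver's health endpoint from inside the minikube node
	minikube ssh -p old-k8s-version-478853 -- curl -sk https://localhost:8443/healthz
	# Cross-check with minikube's own component summary for the profile
	out/minikube-linux-amd64 status -p old-k8s-version-478853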
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-478853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-478853:

                                                
                                                
-- stdout --
	[ docker inspect output identical to the earlier post-mortem dump of old-k8s-version-478853 above; container state, port mappings, and network settings unchanged ]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 6 (303.595476ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:38:19.828797  454671 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-478853" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.49s)
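Worth noting for triage: the harness's status checks above query only the {{.Host}} field, which is why they report Running while the cluster itself is down. Widening the Go template gives a fuller picture in one call; a sketch, assuming minikube's standard status fields (Host, Kubelet, APIServer, Kubeconfig):

	# Report all four status components for the profile on one line
	out/minikube-linux-amd64 status -p old-k8s-version-478853 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'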

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (757.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0216 17:38:33.199480   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:38:37.749457   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:39:05.434938   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:39:13.710778   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:39:16.737511   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:39:26.469172   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 17:39:38.945135   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:39:55.059346   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 17:40:12.009562   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 17:40:26.829517   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:26.834853   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:26.845212   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:26.865490   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:26.906199   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:26.987274   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:27.147400   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:27.468026   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:28.108545   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:29.389051   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
E0216 17:40:31.949841   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
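The E-lines above are background noise relative to this failure: they appear to come from the client certificate rotation watcher, which still references client.crt files of profiles (bridge-123826, kubenet-123826, no-preload-408847, and others) that earlier tests in the run already tore down. A one-line check, assuming the integration layout shown in these paths:

	# Confirm the referenced profile directories no longer exist
	ls /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/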
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 109 (12m36.24154102s)

                                                
                                                
-- stdout --
	* [old-k8s-version-478853] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-478853 in cluster old-k8s-version-478853
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Restarting existing docker container for "old-k8s-version-478853" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	
	

                                                
                                                
-- /stdout --
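All three kubelet errors above share one shape: Docker cannot inspect the k8s.gcr.io control-plane images for v1.16.0 ("Id or size ... is not set"), so none of the static pods can start and the run ultimately exits with status 109. If the images are simply absent from the node's Docker daemon, pre-loading them is one plausible workaround; a hedged sketch, assuming registry.k8s.io still serves the v1.16.0 tags (k8s.gcr.io itself has been frozen and redirected):

	# Pull a missing control-plane image from the current community registry ...
	minikube ssh -p old-k8s-version-478853 -- docker pull registry.k8s.io/kube-controller-manager:v1.16.0
	# ... then retag it under the k8s.gcr.io name the v1.16.0 manifests expect
	minikube ssh -p old-k8s-version-478853 -- docker tag registry.k8s.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0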
** stderr ** 
	I0216 17:38:21.303089  455078 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:38:21.303345  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:38:21.303354  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:38:21.303359  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:38:21.303563  455078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:38:21.304200  455078 out.go:298] Setting JSON to false
	I0216 17:38:21.305432  455078 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4848,"bootTime":1708100254,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 17:38:21.305506  455078 start.go:139] virtualization: kvm guest
	I0216 17:38:21.307760  455078 out.go:177] * [old-k8s-version-478853] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 17:38:21.310010  455078 notify.go:220] Checking for updates...
	I0216 17:38:21.310012  455078 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:38:21.311432  455078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:38:21.312916  455078 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:38:21.314294  455078 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 17:38:21.315598  455078 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 17:38:21.316976  455078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:38:21.318997  455078 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:38:21.321025  455078 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0216 17:38:21.322407  455078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:38:21.345628  455078 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:38:21.345735  455078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:38:21.400126  455078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:38:21.390220676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:38:21.400280  455078 docker.go:295] overlay module found
	I0216 17:38:21.402314  455078 out.go:177] * Using the docker driver based on existing profile
	I0216 17:38:21.403808  455078 start.go:299] selected driver: docker
	I0216 17:38:21.403824  455078 start.go:903] validating driver "docker" against &{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:21.403921  455078 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:38:21.404778  455078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:38:21.460365  455078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:38:21.451261069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:38:21.460674  455078 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:38:21.460728  455078 cni.go:84] Creating CNI manager for ""
	I0216 17:38:21.460750  455078 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:38:21.460764  455078 start_flags.go:323] config:
	{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:21.464108  455078 out.go:177] * Starting control plane node old-k8s-version-478853 in cluster old-k8s-version-478853
	I0216 17:38:21.465746  455078 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:38:21.467261  455078 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:38:21.468714  455078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:38:21.468746  455078 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:38:21.468770  455078 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 17:38:21.468818  455078 cache.go:56] Caching tarball of preloaded images
	I0216 17:38:21.468909  455078 preload.go:174] Found /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 17:38:21.468919  455078 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:38:21.469017  455078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:38:21.486258  455078 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:38:21.486284  455078 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:38:21.486302  455078 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:38:21.486342  455078 start.go:365] acquiring machines lock for old-k8s-version-478853: {Name:mkde5e52743909de9e75497b3ed0dd80f14fc0ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:38:21.486408  455078 start.go:369] acquired machines lock for "old-k8s-version-478853" in 40.03µs
	I0216 17:38:21.486432  455078 start.go:96] Skipping create...Using existing machine configuration
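
The machines lock above serializes concurrent minikube invocations on one host; the spec logged shows a retry delay of 500ms and a 10 minute timeout. A minimal stand-in using flock(2), not minikube's actual lock package:

package main

import (
	"fmt"
	"log"
	"os"
	"syscall"
	"time"
)

// acquireMachinesLock retries a non-blocking flock every delay until timeout,
// mirroring the Delay:500ms Timeout:10m0s spec in the log.
func acquireMachinesLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // caller unlocks with LOCK_UN and closes when done
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out after %s acquiring %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	f, err := acquireMachinesLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}

Because the lock is rarely contended, the fast path is a single flock call, which is why the log reports acquisition in tens of microseconds.
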
	I0216 17:38:21.486439  455078 fix.go:54] fixHost starting: 
	I0216 17:38:21.486680  455078 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:38:21.504783  455078 fix.go:102] recreateIfNeeded on old-k8s-version-478853: state=Stopped err=<nil>
	W0216 17:38:21.504825  455078 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 17:38:21.506811  455078 out.go:177] * Restarting existing docker container for "old-k8s-version-478853" ...
	I0216 17:38:21.508568  455078 cli_runner.go:164] Run: docker start old-k8s-version-478853
	I0216 17:38:21.769480  455078 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:38:21.789204  455078 kic.go:430] container "old-k8s-version-478853" state is running.
	I0216 17:38:21.789622  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:21.808063  455078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:38:21.808370  455078 machine.go:88] provisioning docker machine ...
	I0216 17:38:21.808408  455078 ubuntu.go:169] provisioning hostname "old-k8s-version-478853"
	I0216 17:38:21.808455  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:21.826185  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:21.826686  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:21.826710  455078 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-478853 && echo "old-k8s-version-478853" | sudo tee /etc/hostname
	I0216 17:38:21.827431  455078 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44460->127.0.0.1:33102: read: connection reset by peer
	I0216 17:38:24.971815  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-478853
	
	I0216 17:38:24.971897  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:24.989390  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:24.989714  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:24.989739  455078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-478853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-478853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-478853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:38:25.120712  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
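
The hostname commands above travel over an SSH session dialed to the host port that docker published for the container's 22/tcp (127.0.0.1:33102 in this run). A minimal sketch of that pattern with golang.org/x/crypto/ssh; the key path is hypothetical and the command is simplified:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the run above uses the machine's id_rsa from the store.
	key, err := os.ReadFile("/path/to/machines/old-k8s-version-478853/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node only
	}
	// 33102 is the host port docker published for the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:33102", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}

The "connection reset by peer" at 17:38:21 is the container's sshd not yet listening right after docker start; the dial is simply retried until it succeeds three seconds later.
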
	I0216 17:38:25.120747  455078 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 17:38:25.120784  455078 ubuntu.go:177] setting up certificates
	I0216 17:38:25.120795  455078 provision.go:83] configureAuth start
	I0216 17:38:25.120844  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:25.140311  455078 provision.go:138] copyHostCerts
	I0216 17:38:25.140392  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 17:38:25.140404  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 17:38:25.140473  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 17:38:25.140575  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 17:38:25.140585  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 17:38:25.140611  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 17:38:25.140678  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 17:38:25.140685  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 17:38:25.140706  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 17:38:25.140759  455078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-478853 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-478853]
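
The provisioning step above issues a server certificate signed by the store's CA, with SANs covering the node IP, localhost, and the machine name. A self-contained sketch of the same idea with crypto/x509; it generates a throwaway CA in-process, where the real run reuses ca.pem and ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA, standing in for the store's ca.pem/ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server cert carrying the SAN list from the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-478853"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-478853"},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		panic(err)
	}
}
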
	I0216 17:38:25.293113  455078 provision.go:172] copyRemoteCerts
	I0216 17:38:25.293171  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:38:25.293215  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.311679  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:25.405147  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 17:38:25.429153  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 17:38:25.454627  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 17:38:25.477710  455078 provision.go:86] duration metric: configureAuth took 356.904526ms
	I0216 17:38:25.477736  455078 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:38:25.477903  455078 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:38:25.477947  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.495763  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.496095  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.496108  455078 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:38:25.628939  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:38:25.628966  455078 ubuntu.go:71] root file system type: overlay
	I0216 17:38:25.629075  455078 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:38:25.629128  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.647033  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.647356  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.647419  455078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:38:25.796668  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:38:25.796764  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.815271  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.815583  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.815601  455078 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:38:25.957528  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
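
The diff-or-replace one-liner above keeps the unit update idempotent: docker is reloaded and restarted only when docker.service.new differs from what is installed. The same logic as a small Go sketch (run as root; error handling compressed):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	installed, _ := os.ReadFile(unit) // a missing unit reads as empty and forces an install
	next, err := os.ReadFile(unit + ".new")
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(installed, next) {
		fmt.Println("unit unchanged; leaving docker alone")
		return
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %s", err, out)
		}
	}
}

In this run the unit matched, so no restart happened and the provision step completed in under a second.
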
	I0216 17:38:25.957560  455078 machine.go:91] provisioned docker machine in 4.149165092s
	I0216 17:38:25.957575  455078 start.go:300] post-start starting for "old-k8s-version-478853" (driver="docker")
	I0216 17:38:25.957589  455078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:38:25.957706  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:38:25.957761  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.976195  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.069365  455078 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:38:26.072831  455078 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:38:26.072871  455078 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:38:26.072884  455078 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:38:26.072893  455078 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:38:26.072906  455078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 17:38:26.072974  455078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 17:38:26.073063  455078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 17:38:26.073181  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:38:26.081215  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:38:26.103318  455078 start.go:303] post-start completed in 145.726596ms
	I0216 17:38:26.103402  455078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:38:26.103446  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.121271  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.213029  455078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:38:26.217252  455078 fix.go:56] fixHost completed within 4.730808663s
	I0216 17:38:26.217282  455078 start.go:83] releasing machines lock for "old-k8s-version-478853", held for 4.730859928s
	I0216 17:38:26.217359  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:26.236067  455078 ssh_runner.go:195] Run: cat /version.json
	I0216 17:38:26.236096  455078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:38:26.236126  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.236181  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.255208  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.256650  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.432006  455078 ssh_runner.go:195] Run: systemctl --version
	I0216 17:38:26.436397  455078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 17:38:26.440753  455078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 17:38:26.440819  455078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:38:26.449648  455078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:38:26.458023  455078 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0216 17:38:26.458059  455078 start.go:475] detecting cgroup driver to use...
	I0216 17:38:26.458090  455078 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:38:26.458223  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:38:26.474175  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:38:26.484094  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:38:26.493935  455078 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:38:26.494002  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:38:26.503403  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:38:26.512684  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:38:26.521909  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:38:26.531787  455078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:38:26.540705  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:38:26.550084  455078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:38:26.558059  455078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:38:26.565815  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:26.641416  455078 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 17:38:26.728849  455078 start.go:475] detecting cgroup driver to use...
	I0216 17:38:26.728911  455078 detect.go:196] detected "cgroupfs" cgroup driver on host os
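
The log does not show how detect.go reaches its "cgroupfs" verdict. One direct probe, and the same command this run executes later via docker info, is to ask the daemon itself which cgroup driver it uses:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// The same probe the log runs at 17:38:33.946924: ask dockerd for its cgroup driver.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" or "systemd"
}

The detected driver then drives the sed edits above (SystemdCgroup = false for containerd) and the /etc/docker/daemon.json written below, so all runtimes agree with the kubelet.
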
	I0216 17:38:26.728990  455078 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:38:26.742735  455078 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:38:26.742813  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:38:26.759375  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:38:26.799127  455078 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:38:26.803185  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:38:26.812600  455078 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:38:26.833140  455078 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:38:26.932984  455078 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:38:27.033484  455078 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:38:27.033629  455078 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:38:27.051185  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:27.130916  455078 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:38:27.399678  455078 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:38:27.425421  455078 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:38:27.452311  455078 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:38:27.452430  455078 cli_runner.go:164] Run: docker network inspect old-k8s-version-478853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
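
The --format template above flattens the network's inspect output into a single JSON object. A sketch of a matching Go struct and decode, using an illustrative payload rather than live docker output:

package main

import (
	"encoding/json"
	"fmt"
)

// Fields mirror the inspect template in the log line above.
type dockerNetwork struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	// Sample payload shaped like the template's output (values illustrative).
	raw := `{"Name":"old-k8s-version-478853","Driver":"bridge","Subnet":"192.168.76.0/24","Gateway":"192.168.76.1","MTU":1500,"ContainerIPs":["192.168.76.2/24"]}`
	var n dockerNetwork
	if err := json.Unmarshal([]byte(raw), &n); err != nil {
		panic(err)
	}
	fmt.Printf("gateway %s on %s (MTU %d)\n", n.Gateway, n.Subnet, n.MTU)
}

The gateway from this inspect is what the following grep pins as host.minikube.internal inside the guest.
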
	I0216 17:38:27.470021  455078 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 17:38:27.473738  455078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:38:27.498087  455078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:38:27.498175  455078 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:38:27.517834  455078 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:38:27.517864  455078 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:38:27.517929  455078 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:38:27.526852  455078 ssh_runner.go:195] Run: which lz4
	I0216 17:38:27.530297  455078 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 17:38:27.533688  455078 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:38:27.533725  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 17:38:28.338789  455078 docker.go:649] Took 0.808536 seconds to copy over tarball
	I0216 17:38:28.338870  455078 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:38:30.411788  455078 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072893303s)
	I0216 17:38:30.411815  455078 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 17:38:30.479175  455078 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:38:30.487733  455078 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:38:30.505100  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:30.582595  455078 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:38:33.116313  455078 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.533681626s)
	I0216 17:38:33.116382  455078 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:38:33.135813  455078 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:38:33.135845  455078 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:38:33.135858  455078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:38:33.137162  455078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.137160  455078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.137160  455078 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.137223  455078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.137354  455078 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.137392  455078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.137429  455078 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:38:33.137443  455078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.138311  455078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.138333  455078 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.138313  455078 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:38:33.138376  455078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.138385  455078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.138313  455078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.138400  455078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.138433  455078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.285042  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 17:38:33.303267  455078 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 17:38:33.303312  455078 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:38:33.303348  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:38:33.315066  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.321725  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 17:38:33.323279  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.334757  455078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 17:38:33.334805  455078 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.334852  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.343699  455078 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 17:38:33.343747  455078 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.343793  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.352683  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.354280  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:38:33.362703  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:38:33.371065  455078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 17:38:33.371116  455078 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.371157  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.375587  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.376027  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.388362  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.393888  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:38:33.398036  455078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 17:38:33.398083  455078 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.398130  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.398631  455078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 17:38:33.398662  455078 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.398705  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.409280  455078 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 17:38:33.409328  455078 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.409390  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.417932  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:38:33.419058  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:38:33.429478  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 17:38:33.927751  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.946761  455078 cache_images.go:92] LoadImages completed in 810.887895ms
	W0216 17:38:33.946835  455078 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
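
Each "needs transfer" line above comes from comparing the image ID in the runtime against the expected hash. Here the v1.16 images exist only under their old k8s.gcr.io names, so every registry.k8s.io lookup fails, and the subsequent load also fails because the corresponding files under cache/images are absent. A sketch of the comparison (the digest shown is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent from the runtime or present
// under a different ID than expected (the check behind the log lines above).
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.1"
	want := "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	if needsTransfer(img, want) {
		fmt.Println(img, "needs transfer: remove and reload from the on-disk cache")
	}
}
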
	I0216 17:38:33.946924  455078 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:38:33.998980  455078 cni.go:84] Creating CNI manager for ""
	I0216 17:38:33.999011  455078 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:38:33.999032  455078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:38:33.999057  455078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-478853 NodeName:old-k8s-version-478853 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:38:33.999219  455078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-478853"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-478853
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 17:38:33.999336  455078 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-478853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
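
The kubeadm config and kubelet unit above are rendered from templates filled in with the options struct logged at kubeadm.go:176. A compressed sketch of that rendering with text/template; the fragment and field names are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A fragment shaped like the generated kubeadm config above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.76.2",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/dockershim.sock",
		"NodeName":         "old-k8s-version-478853",
		"NodeIP":           "192.168.76.2",
	}); err != nil {
		panic(err)
	}
}

The rendered file is written to /var/tmp/minikube/kubeadm.yaml.new (2174 bytes below) and only swapped in if it differs from the existing kubeadm.yaml.
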
	I0216 17:38:33.999401  455078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:38:34.008330  455078 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:38:34.008396  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:38:34.017118  455078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 17:38:34.036229  455078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:38:34.052983  455078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 17:38:34.069858  455078 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:38:34.073399  455078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:38:34.084821  455078 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853 for IP: 192.168.76.2
	I0216 17:38:34.084858  455078 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:38:34.085003  455078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 17:38:34.085065  455078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 17:38:34.085164  455078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.key
	I0216 17:38:34.085237  455078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25
	I0216 17:38:34.085304  455078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key
	I0216 17:38:34.085439  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 17:38:34.085482  455078 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 17:38:34.085498  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 17:38:34.085534  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 17:38:34.085568  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:38:34.085605  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 17:38:34.085675  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:38:34.086382  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:38:34.110629  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 17:38:34.134912  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:38:34.158975  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 17:38:34.182778  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:38:34.206586  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 17:38:34.230134  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:38:34.254430  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 17:38:34.277612  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 17:38:34.300924  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 17:38:34.323994  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:38:34.347005  455078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:38:34.363860  455078 ssh_runner.go:195] Run: openssl version
	I0216 17:38:34.369225  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:38:34.378947  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.382670  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.382744  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.389395  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:38:34.398260  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 17:38:34.407649  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.411256  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.411309  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.417851  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
	I0216 17:38:34.426535  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 17:38:34.436025  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.439431  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.439491  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.445718  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 17:38:34.455048  455078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:38:34.458881  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 17:38:34.465622  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 17:38:34.472122  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 17:38:34.478657  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 17:38:34.485187  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 17:38:34.491630  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
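
The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours. The equivalent check in Go's crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -checkend <seconds>` from the log.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}

All six checks pass here, so no certificates are regenerated before the cluster restart.
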
	I0216 17:38:34.498893  455078 kubeadm.go:404] StartCluster: {Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:34.499126  455078 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:38:34.518382  455078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:38:34.527854  455078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 17:38:34.527878  455078 kubeadm.go:636] restartCluster start
	I0216 17:38:34.527928  455078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 17:38:34.536194  455078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:34.537015  455078 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:38:34.537514  455078 kubeconfig.go:146] "old-k8s-version-478853" context is missing from /home/jenkins/minikube-integration/17936-6821/kubeconfig - will repair!
	I0216 17:38:34.538343  455078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/kubeconfig: {Name:mkdc2ed683d72ff0e162ea619463de7edb9c0858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:38:34.540022  455078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 17:38:34.548446  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:34.548492  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:34.558247  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... the same apiserver status check repeats every ~500ms from 17:38:35.049 through 17:38:44.049, each attempt failing with: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 ...]
	I0216 17:38:44.548752  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:44.548839  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:44.560016  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:44.560053  455078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
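
The ten-second burst of checks above is a fixed-interval poll under an overall deadline: every ~500ms minikube asks pgrep for a kube-apiserver process, and when the deadline expires first it records the context error and falls through to "needs reconfigure". A sketch of that pattern with Go's context package, assuming a 500ms tick and a 10s budget to match the timestamps in this log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep for a kube-apiserver process every 500ms
// until one appears or ctx expires, mirroring the loop in the log above.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}
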
	I0216 17:38:44.560062  455078 kubeadm.go:1135] stopping kube-system containers ...
	I0216 17:38:44.560127  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:38:44.578770  455078 docker.go:483] Stopping containers: [075b0ec6a484 d2ce0b886430 928d392994b3 5e7370fcf7f8]
	I0216 17:38:44.578834  455078 ssh_runner.go:195] Run: docker stop 075b0ec6a484 d2ce0b886430 928d392994b3 5e7370fcf7f8
	I0216 17:38:44.596955  455078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
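
Before replaying kubeadm, the node is quiesced: containers are selected via the k8s_.*_(kube-system)_ naming convention the dockershim uses, stopped, and then the kubelet itself is stopped so nothing restarts them mid-reconfigure. A sketch of the list-then-stop step, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// dockershim names containers k8s_<container>_<pod>_<namespace>_..., so a
	// name filter on "(kube-system)" selects exactly the control-plane containers.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	fmt.Println("Stopping containers:", ids)
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		fmt.Println("docker stop failed:", err)
	}
}
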
	I0216 17:38:44.609545  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:38:44.618238  455078 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 16 17:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 16 17:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 16 17:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 16 17:32 /etc/kubernetes/scheduler.conf
	
	I0216 17:38:44.618338  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 17:38:44.626677  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 17:38:44.634782  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 17:38:44.643301  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 17:38:44.651439  455078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:38:44.659643  455078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 17:38:44.659668  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:44.715075  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.624969  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.844221  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.921661  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
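
Note that this is not a full kubeadm init: minikube replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml. A sketch of that sequence, assuming the binary path and config location shown in the log, wrapped in bash -c so that $PATH expands as in the logged commands:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.16.0"
		cfg    = "/var/tmp/minikube/kubeadm.yaml"
	)
	// The same init phases, in the same order, that the log shows being replayed.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, ph := range phases {
		script := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, ph, cfg)
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", ph, err, out)
			return
		}
	}
}
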
	I0216 17:38:46.017075  455078 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:38:46.017183  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the pgrep check repeats every ~500ms from 17:38:46.517 through 17:39:45.017 without an apiserver process ever appearing ...]
	I0216 17:39:45.517539  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:46.017624  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:39:46.037272  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.037295  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:39:46.037341  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:39:46.055115  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.055155  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:39:46.055211  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:39:46.072423  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.072450  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:39:46.072507  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:39:46.090301  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.090332  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:39:46.090378  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:39:46.107880  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.107903  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:39:46.107956  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:39:46.125772  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.125798  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:39:46.125854  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:39:46.144677  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.144701  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:39:46.144756  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:39:46.162329  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.162352  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:39:46.162364  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:39:46.162380  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:39:46.185113  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:24 old-k8s-version-478853 kubelet[1655]: E0216 17:39:24.090711    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.185260  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:24 old-k8s-version-478853 kubelet[1655]: E0216 17:39:24.091853    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.187251  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:25 old-k8s-version-478853 kubelet[1655]: E0216 17:39:25.090502    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.194562  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:29 old-k8s-version-478853 kubelet[1655]: E0216 17:39:29.089933    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:46.207697  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.211063  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.219621  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.220204  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
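
Every kubelet problem found above is the same failure: ImageInspectError with "Id or size of image ... is not set". That message indicates docker image inspect returned a record the dockershim considers incomplete (typically a corrupt or partially loaded local image), which is distinct from the image simply being absent. A rough way to reproduce the check by hand, assuming the docker CLI and using the scheduler image name taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Inspect the cached image; if Id or Size come back zero-valued, the
	// dockershim reports ImageInspectError exactly as in the kubelet log.
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{json .}}", "k8s.gcr.io/kube-scheduler:v1.16.0").Output()
	if err != nil {
		fmt.Println("image not present locally:", err)
		return
	}
	var img struct {
		Id   string `json:"Id"`
		Size int64  `json:"Size"`
	}
	if err := json.Unmarshal(out, &img); err != nil {
		fmt.Println("unparseable inspect output:", err)
		return
	}
	if img.Id == "" || img.Size == 0 {
		fmt.Println("inspect succeeded but Id/size are unset: cached image is corrupt")
		return
	}
	fmt.Printf("image ok: %s (%d bytes)\n", img.Id, img.Size)
}
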
	I0216 17:39:46.231233  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:39:46.231271  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:39:46.254556  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:39:46.254587  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:39:46.318337  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:39:46.318446  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:39:46.318467  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:39:46.335929  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:39:46.335962  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
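
The container-status step relies on a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl when present, otherwise (or when crictl fails) fall back to plain docker ps. The same preference order sketched in Go, assuming exec.LookPath as the availability test:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker,
// mirroring the shell one-liner in the log.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Printf("%s", out)
}
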
	I0216 17:39:46.372855  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:46.372884  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:39:46.372951  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:39:46.372966  455078 out.go:239]   Feb 16 17:39:29 old-k8s-version-478853 kubelet[1655]: E0216 17:39:29.089933    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:46.372982  455078 out.go:239]   Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.372999  455078 out.go:239]   Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.373011  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.373032  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:39:46.373043  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:46.373054  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:39:56.373478  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:56.383879  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:39:56.401408  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.401433  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:39:56.401477  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:39:56.418690  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.418712  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:39:56.418759  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:39:56.436337  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.436362  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:39:56.436415  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:39:56.455521  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.455553  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:39:56.455602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:39:56.473949  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.473981  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:39:56.474028  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:39:56.491473  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.491495  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:39:56.491541  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:39:56.509845  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.509869  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:39:56.509955  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:39:56.528197  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.528222  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:39:56.528231  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:39:56.528242  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:39:56.549520  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.551570  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.558562  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:56.559087  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.571119  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.571305  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.579133  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.586015  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:39:56.590770  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:39:56.590803  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:39:56.615066  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:39:56.615101  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:39:56.678064  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:39:56.678096  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:39:56.678114  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:39:56.695201  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:39:56.695238  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:39:56.736311  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:56.736338  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:39:56.736412  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:39:56.736433  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.736451  455078 out.go:239]   Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.736465  455078 out.go:239]   Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.736474  455078 out.go:239]   Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.736483  455078 out.go:239]   Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:39:56.736496  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:56.736508  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:06.738101  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:06.750726  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:06.772968  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.772995  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:06.773046  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:06.791480  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.791505  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:06.791551  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:06.815979  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.816012  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:06.816068  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:06.842123  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.842147  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:06.842203  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:06.860609  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.860654  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:06.860709  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:06.879119  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.879147  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:06.879191  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:06.898150  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.898182  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:06.898242  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:06.924427  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.924445  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:06.924454  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:06.924465  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:06.953125  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:06.953295  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:06.960436  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:06.965576  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:06.972709  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:06.974757  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:06.985103  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:06.985250  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:06.988009  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:06.988029  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:07.022943  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:07.023046  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:07.085083  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:07.085110  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:07.085127  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:07.106416  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:07.106465  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:07.152094  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:07.152117  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:07.152199  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:07.152209  455078 out.go:239]   Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:07.152220  455078 out.go:239]   Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:07.152227  455078 out.go:239]   Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:07.152233  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:07.152240  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:07.152247  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:07.152255  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
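
Every kubelet problem in this run is the same ImageInspectError: the inspect call returned image metadata whose Id or size "is not set". A minimal sketch of replaying that inspect by hand against the node's Docker daemon (the profile name comes from this log; the command is illustrative and assumes the docker CLI is available inside the node):

    # Print exactly the fields the error complains about; empty output here
    # would line up with "Id or size of image ... is not set"
    minikube ssh -p old-k8s-version-478853 "docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.16.0"
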
	I0216 17:40:17.154126  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:17.166732  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:17.188369  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.188397  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:17.188456  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:17.208931  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.208958  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:17.209015  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:17.231036  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.231064  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:17.231117  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:17.251517  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.251544  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:17.251609  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:17.273246  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.273278  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:17.273329  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:17.294078  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.294106  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:17.294162  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:17.315685  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.315708  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:17.315752  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:17.339445  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.339468  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:17.339477  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:17.339488  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:17.373320  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:17.373357  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:17.450406  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:17.450427  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:17.450442  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:17.470514  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:17.470553  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:17.518001  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:17.518029  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:17.548549  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.551801  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.566478  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:17.566729  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:17.584759  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.587832  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.593226  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:17.595733  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:17.595755  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:17.595804  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:17.595815  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:17.595822  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:17.595829  455078 out.go:239]   Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.595838  455078 out.go:239]   Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.595847  455078 out.go:239]   Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:17.595855  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:17.595860  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:27.597408  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:27.608054  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:27.625950  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.625980  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:27.626038  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:27.643801  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.643825  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:27.643880  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:27.661848  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.661878  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:27.661942  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:27.680910  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.680935  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:27.680984  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:27.698550  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.698575  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:27.698619  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:27.716355  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.716386  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:27.716449  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:27.739573  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.739621  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:27.739686  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:27.760360  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.760383  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:27.760395  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:27.760426  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:27.779114  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:27.779170  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:27.818659  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:27.818687  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:27.841156  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.841308  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.853903  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:27.855874  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:27.859522  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.864706  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.874176  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:40:27.879404  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:27.879429  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:27.903542  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:27.903580  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:27.964966  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:27.964993  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:27.965008  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:27.965060  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:27.965077  455078 out.go:239]   Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:27.965133  455078 out.go:239]   Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:27.965146  455078 out.go:239]   Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.965155  455078 out.go:239]   Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.965165  455078 out.go:239]   Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:40:27.965175  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:27.965182  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
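
The per-component `docker ps -a --filter=name=k8s_<component>` probes above lean on dockershim's container-naming convention, where every container it manages is named `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`. A sketch of the equivalent one-shot listing (profile name from this log; illustrative only):

    # List every dockershim-managed container at once instead of per component
    minikube ssh -p old-k8s-version-478853 "docker ps -a --filter name=k8s_ --format '{{.ID}} {{.Names}} {{.Status}}'"
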
	I0216 17:40:37.966560  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:37.977313  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:37.994775  455078 logs.go:276] 0 containers: []
	W0216 17:40:37.994798  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:37.994844  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:38.012932  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.012960  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:38.013014  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:38.033792  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.033820  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:38.033880  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:38.052523  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.052549  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:38.052610  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:38.072650  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.072705  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:38.072765  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:38.092189  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.092223  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:38.092296  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:38.110333  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.110359  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:38.110404  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:38.128992  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.129027  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:38.129037  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:38.129048  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:38.167101  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:38.167135  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:38.186657  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.191871  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:38.201457  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:38.207565  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:38.209614  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.217808  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:40:38.224819  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:38.224859  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:38.248754  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:38.248833  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:38.311199  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:38.311223  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:38.311236  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:38.327036  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:38.327063  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:38.327121  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:38.327132  455078 out.go:239]   Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:38.327140  455078 out.go:239]   Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:38.327148  455078 out.go:239]   Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:38.327154  455078 out.go:239]   Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.327160  455078 out.go:239]   Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:40:38.327169  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:38.327174  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:48.327861  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:48.339194  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:48.360648  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.360673  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:48.360728  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:48.378486  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.378513  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:48.378557  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:48.398639  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.398666  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:48.398712  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:48.417793  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.417817  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:48.417873  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:48.435529  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.435552  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:48.435602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:48.457049  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.457082  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:48.457155  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:48.477801  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.477826  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:48.477868  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:48.496234  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.496257  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:48.496265  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:48.496278  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:48.538184  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:48.538212  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:48.564633  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.566786  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:48.576446  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:48.585675  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:48.585865  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.588023  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:48.601821  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:48.601858  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:48.626705  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:48.626746  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:48.803956  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:48.803984  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:48.803997  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:48.820684  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:48.820710  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:48.820755  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:48.820765  455078 out.go:239]   Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:48.820790  455078 out.go:239]   Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:48.820799  455078 out.go:239]   Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:48.820807  455078 out.go:239]   Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.820814  455078 out.go:239]   Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:48.820820  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:48.820826  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
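
Each roughly ten-second cycle above opens with the same liveness probe, `sudo pgrep -xnf kube-apiserver.*minikube.*`, which only matches once an apiserver process whose command line mentions minikube is running. Re-running it by hand looks like this (profile name from this log; no output and a non-zero exit simply mean the apiserver is still not up):

    # The harness's apiserver liveness probe, run manually; repeat as needed
    minikube ssh -p old-k8s-version-478853 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
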
	I0216 17:40:58.821518  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:58.832683  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:58.850170  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.850200  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:58.850256  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:58.868305  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.868327  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:58.868367  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:58.887531  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.887556  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:58.887602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:58.905145  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.905176  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:58.905229  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:58.923499  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.923530  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:58.923587  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:58.941547  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.941581  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:58.941629  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:58.959233  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.959258  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:58.959309  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:58.977281  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.977302  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:58.977313  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:58.977323  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:59.015956  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:59.015983  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:59.040126  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.040302  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:59.042282  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.056437  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:59.062909  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.065045  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.065415  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:40:59.073540  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:59.073574  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:59.097435  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:59.097482  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:59.159801  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:59.159827  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:59.159839  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:59.176592  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:59.176621  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:59.176676  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:59.176684  455078 out.go:239]   Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.176693  455078 out.go:239]   Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:59.176709  455078 out.go:239]   Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.176718  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.176728  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:40:59.176735  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:59.176740  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
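
For reference, each ~10-second retry in this log runs the same in-node probe sequence: a pgrep for kube-apiserver, one "docker ps -a" name filter per expected control-plane container, then kubelet/dmesg/describe-nodes/Docker log gathering. A condensed, hand-runnable sketch of the container probe follows (the individual commands are verbatim from the log; wrapping them in a loop, and reaching the node with "minikube ssh -p old-k8s-version-478853", are editorial assumptions):

	# One docker ps probe per expected container; an empty result is what the
	# 'No container was found matching "<name>"' warnings in this log report.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}')
	  echo "$c: ${ids:-<none>}"
	done
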
	I0216 17:41:09.178430  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:09.189176  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:09.207320  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.207345  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:09.207400  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:09.225002  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.225033  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:09.225096  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:09.243928  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.243959  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:09.244013  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:09.262481  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.262505  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:09.262559  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:09.279969  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.279992  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:09.280049  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:09.297754  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.297777  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:09.297825  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:09.315771  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.315800  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:09.315852  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:09.333460  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.333491  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:09.333500  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:09.333511  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:09.355521  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.362102  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.364251  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.364640  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:09.381046  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.388010  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.388320  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.390233  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:41:09.392031  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:09.392060  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:09.417243  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:09.417287  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:09.478675  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:09.478700  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:09.478711  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:09.495170  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:09.495201  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:09.534342  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:09.534369  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:09.534418  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:09.534429  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:09.534440  455078 out.go:239]   Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.534451  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.534457  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.534469  455078 out.go:239]   Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:41:09.534474  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:09.534482  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
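
The "Found kubelet problem" entries above come from scanning the most recent 400 kubelet journal lines. A minimal way to re-run that scan by hand (the journalctl invocation is verbatim from the log; the grep filter is an assumption standing in for minikube's internal matcher in logs.go):

	# Pull the same journal window and keep only the pod sync failures.
	sudo journalctl -u kubelet -n 400 | grep 'pod_workers.go:191'
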
	I0216 17:41:19.535038  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:19.545504  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:19.563494  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.563519  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:19.563579  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:19.581616  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.581645  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:19.581692  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:19.599875  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.599906  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:19.599956  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:19.618224  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.618251  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:19.618310  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:19.637362  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.637392  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:19.637442  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:19.655724  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.655755  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:19.655800  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:19.672560  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.672588  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:19.672636  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:19.690212  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.690239  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:19.690251  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:19.690265  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:19.719464  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.726630  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:19.726900  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:19.728877  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:19.741983  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.745889  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:19.748644  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:19.748681  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:19.774437  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:19.774473  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:19.836688  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:19.836707  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:19.836719  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:19.852476  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:19.852506  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:19.889446  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:19.889484  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:19.889541  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:19.889559  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:19.889574  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:19.889591  455078 out.go:239]   Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:19.889607  455078 out.go:239]   Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.889625  455078 out.go:239]   Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:19.889639  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:19.889653  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:29.891027  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:29.901935  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:29.919667  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.919697  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:29.919757  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:29.937792  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.937823  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:29.937873  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:29.955488  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.955513  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:29.955557  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:29.973119  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.973147  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:29.973194  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:29.991607  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.991634  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:29.991681  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:30.010229  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.010258  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:30.010330  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:30.029419  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.029446  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:30.029496  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:30.047844  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.047870  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:30.047882  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:30.047900  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:30.108010  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:30.108031  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:30.108042  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:30.124087  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:30.124121  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:30.161506  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:30.161532  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:30.182528  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.182822  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.184822  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.197489  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.201168  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.204811  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.208217  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.216614  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.220063  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:30.221825  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:30.221850  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:30.245800  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:30.245840  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:30.245897  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:30.245910  455078 out.go:239]   Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.245938  455078 out.go:239]   Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.245947  455078 out.go:239]   Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.245955  455078 out.go:239]   Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.245969  455078 out.go:239]   Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:30.245977  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:30.245986  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
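
Every problem flagged in these cycles is the same ImageInspectError: the docker image record for each v1.16.0 control-plane image (and etcd:3.3.15-0) reports no Id or size, which suggests the images are missing or their metadata is unreadable on the node. A quick manual check (image names are verbatim from the log; running "docker image inspect" here is an illustrative assumption, not a step the test itself performs):

	# A healthy image prints its sha256 Id and byte size; a missing or corrupt
	# record fails, matching the kubelet error 'Id or size of image ... is not set'.
	for img in k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 \
	           k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/etcd:3.3.15-0; do
	  docker image inspect --format='{{.Id}} {{.Size}}' "$img" || echo "$img: inspect failed"
	done
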
	I0216 17:41:40.247341  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:40.258231  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:40.277091  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.277115  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:40.277170  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:40.295536  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.295559  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:40.295604  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:40.312997  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.313026  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:40.313071  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:40.330525  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.330546  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:40.330589  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:40.348713  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.348742  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:40.348800  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:40.366775  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.366797  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:40.366841  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:40.385643  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.385663  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:40.385707  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:40.403427  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.403450  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:40.403459  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:40.403470  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:40.439890  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:40.439928  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:40.462737  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.466398  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.470658  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.479019  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:40.482450  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.487453  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.489577  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.500740  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
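
All eight kubelet problems above are one failure repeating: dockershim's image inspection returned metadata with an empty Id/size for the cached v1.16.0 control-plane images, so kubelet skips StartContainer for every static pod. A minimal manual check, assuming the profile name from this log and the docker CLI inside the node (a hypothetical invocation, not part of the test):

	# Print the Id and Size dockershim would have read; an empty Id here
	# would reproduce the "Id or size ... is not set" error above.
	minikube ssh -p old-k8s-version-478853 "docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.16.0"
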
	I0216 17:41:40.501258  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:40.501276  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:40.525173  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:40.525207  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:40.587517  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
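
The refused connection on localhost:8443 matches the container scans elsewhere in this log: no kube-apiserver container was ever started, so nothing listens on the apiserver port. A quick sketch to confirm from the host, assuming ss is available in the node image:

	# Expect no listener on 8443 while the apiserver container is missing
	minikube ssh -p old-k8s-version-478853 "sudo ss -ltn" | grep 8443 || echo "nothing listening on 8443"
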
	I0216 17:41:40.587539  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:40.587555  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:40.603528  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:40.603556  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:40.603611  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:40.603623  455078 out.go:239]   Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:40.603636  455078 out.go:239]   Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.603652  455078 out.go:239]   Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.603661  455078 out.go:239]   Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.603670  455078 out.go:239]   Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:41:40.603681  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:40.603689  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:50.604423  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
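
The pgrep probe above is how the wait loop decides whether an apiserver process exists at all: -x requires the pattern to match exactly, -n picks the newest matching process, and -f matches against the full command line rather than just the executable name. Quoted for an interactive shell, the same probe would be:

	# Prints the newest PID whose full command line matches, exits non-zero otherwise
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
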
	I0216 17:41:50.614773  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:50.632046  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.632072  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:50.632120  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:50.649668  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.649705  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:50.649752  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:50.667298  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.667324  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:50.667369  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:50.684964  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.684985  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:50.685058  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:50.702294  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.702315  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:50.702372  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:50.719213  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.719242  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:50.719298  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:50.739288  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.739316  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:50.739379  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:50.758688  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.758711  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
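
These per-component docker ps queries rely on dockershim's container naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name filter of k8s_kube-apiserver can only match a container kubelet actually created. All eight scans come back empty because the StartContainer calls never got past image inspection. A slightly more verbose form of the same query, run inside the node:

	# Show name and status as well as the ID for any apiserver container
	docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Names}} {{.Status}}'
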
	I0216 17:41:50.758721  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:50.758733  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:50.778773  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.784194  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.786483  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:50.798383  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:50.801984  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.805643  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.807814  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:41:50.817121  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:50.817159  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:50.840704  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:50.840735  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:50.902600  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:41:50.902624  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:50.902661  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:50.920132  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:50.920249  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
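
The container-status command prefers crictl when it resolves on PATH and otherwise falls back to plain docker ps: the backquoted which crictl || echo crictl expands either to crictl's full path or to the literal word crictl, and if that command fails the || runs the docker fallback. Spelled out, it is roughly:

	# Rough equivalent of the one-liner above
	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a
	else
	  sudo docker ps -a
	fi
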
	I0216 17:41:50.959025  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:50.959061  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:50.959128  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:50.959146  455078 out.go:239]   Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:50.959160  455078 out.go:239]   Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:50.959176  455078 out.go:239]   Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.959188  455078 out.go:239]   Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.959198  455078 out.go:239]   Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:41:50.959208  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:50.959218  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:00.960497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:00.971191  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:00.988983  455078 logs.go:276] 0 containers: []
	W0216 17:42:00.989007  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:00.989051  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:01.007472  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.007502  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:01.007549  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:01.027235  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.027266  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:01.027328  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:01.045396  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.045418  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:01.045466  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:01.063608  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.063634  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:01.063676  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:01.081846  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.081875  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:01.081933  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:01.100572  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.100605  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:01.100656  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:01.118064  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.118093  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:01.118107  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:01.118120  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:01.178472  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:42:01.178494  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:01.178510  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:01.194152  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:01.194180  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:01.229057  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:01.229088  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:01.252846  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.256323  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.259747  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.261761  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:01.276222  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.278237  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.281914  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.283854  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:01.288825  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:01.288847  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:01.312195  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:01.312226  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:01.312273  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:01.312283  455078 out.go:239]   Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:01.312293  455078 out.go:239]   Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.312302  455078 out.go:239]   Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.312314  455078 out.go:239]   Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.312323  455078 out.go:239]   Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:01.312330  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:01.312336  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:11.313806  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:11.324599  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:11.342926  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.342950  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:11.343009  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:11.361832  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.361863  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:11.361913  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:11.380388  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.380413  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:11.380463  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:11.398531  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.398555  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:11.398609  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:11.416599  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.416633  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:11.416691  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:11.437302  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.437329  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:11.437381  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:11.455500  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.455526  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:11.455588  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:11.473447  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.473472  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:11.473483  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:11.473499  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:11.489109  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:11.489137  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:11.528617  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:11.528657  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:11.554793  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.556844  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.560487  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:11.562461  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:11.577032  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.577534  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.584091  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
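
By this point the journal shows the same four static pods (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) cycling every few seconds. To confirm the failures stay confined to those control-plane containers, one could aggregate the journal tail, assuming the same profile as above:

	# Count StartContainer failures per container name in the last 400 kubelet lines
	minikube ssh -p old-k8s-version-478853 "sudo journalctl -u kubelet -n 400" \
	  | grep -o 'failed to "StartContainer" for "[a-z-]*"' | sort | uniq -c
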
	I0216 17:42:11.584767  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:11.584786  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:11.607897  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:11.607930  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:11.670359  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:42:11.670384  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:11.670396  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:11.670447  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:11.670457  455078 out.go:239]   Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:11.670467  455078 out.go:239]   Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:11.670473  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.670480  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.670488  455078 out.go:239]   Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:11.670494  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:11.670502  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:21.671639  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:21.682566  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:21.700727  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.700751  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:21.700797  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:21.718547  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.718575  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:21.718638  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:21.738352  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.738376  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:21.738422  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:21.758981  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.759006  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:21.759060  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:21.779871  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.779920  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:21.779989  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:21.799706  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.799736  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:21.799787  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:21.817228  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.817255  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:21.817308  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:21.836951  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.836983  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:21.836997  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:21.837012  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:21.872431  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:21.872957  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:21.879827  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:21.881920  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:21.893080  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:21.899070  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:21.899090  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:21.922375  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:21.922425  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:21.984024  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:42:21.984044  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:21.984056  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:22.000242  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:22.000273  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:22.038265  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:22.038288  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:22.038331  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:22.038355  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:22.038363  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:22.038374  455078 out.go:239]   Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:22.038390  455078 out.go:239]   Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:22.038401  455078 out.go:239]   Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:22.038411  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:22.038419  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
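
	[editor's note] The repeated ImageInspectError entries above mean the kubelet could not read the Id/Size metadata of the control-plane images through the Docker API. A minimal manual check, assuming the profile name from this run (old-k8s-version-478853), would be to inspect one of the affected images from inside the node:

	  # Open a shell inside the minikube node for this profile
	  minikube ssh -p old-k8s-version-478853
	  # Ask Docker for the exact fields the kubelet failed to read;
	  # an error or empty output reproduces the inspect failure logged above
	  docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 --format '{{.Id}} {{.Size}}'
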
	I0216 17:42:32.039537  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:32.050189  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:32.067646  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.067676  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:32.067745  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:32.087169  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.087213  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:32.087271  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:32.105465  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.105488  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:32.105546  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:32.123431  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.123464  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:32.123516  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:32.141039  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.141064  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:32.141122  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:32.159484  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.159515  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:32.159580  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:32.177162  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.177188  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:32.177241  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:32.194247  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.194275  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:32.194287  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:32.194305  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:32.253876  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
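
	[editor's note] The "connection to the server localhost:8443 was refused" error means nothing is listening on the apiserver port, which is consistent with the empty `docker ps` results above. A quick probe from inside the node (a sketch, not part of the test run) would be:

	  # Check whether any process is bound to the apiserver port
	  sudo ss -tlnp | grep 8443
	  # Hit the healthz endpoint directly; "connection refused" matches the kubectl error
	  curl -sk https://localhost:8443/healthz
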
	I0216 17:42:32.253898  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:32.253912  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:32.270178  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:32.270213  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:32.305859  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:32.305889  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:32.328308  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.330319  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.341106  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:32.347725  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:32.352568  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.354544  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.364250  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:32.364558  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:32.364575  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:32.389634  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:32.389668  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:32.389721  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:32.389734  455078 out.go:239]   Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:32.389744  455078 out.go:239]   Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:32.389754  455078 out.go:239]   Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.389762  455078 out.go:239]   Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.389781  455078 out.go:239]   Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:32.389791  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:32.389801  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:42.390328  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:42.401227  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:42.419362  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.419393  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:42.419438  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:42.437451  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.437495  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:42.437554  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:42.455185  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.455206  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:42.455252  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:42.472418  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.472439  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:42.472493  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:42.489791  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.489818  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:42.489867  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:42.507633  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.507662  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:42.507716  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:42.526869  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.526889  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:42.526943  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:42.544969  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.544999  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:42.545011  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:42.545026  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:42.570906  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.575920  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.577964  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:42.587726  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:42.592654  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:35 old-k8s-version-478853 kubelet[1655]: E0216 17:42:35.090202    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.600845  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:40 old-k8s-version-478853 kubelet[1655]: E0216 17:42:40.089571    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.602832  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:41 old-k8s-version-478853 kubelet[1655]: E0216 17:42:41.088872    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:42:42.604949  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:42.604968  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:42.628966  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:42.629003  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:42.688286  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:42.688314  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:42.688331  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:42.704424  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:42.704453  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:42.742407  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:42.742433  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:42.742493  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:42.742501  455078 out.go:239]   Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:42.742508  455078 out.go:239]   Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:42.742517  455078 out.go:239]   Feb 16 17:42:35 old-k8s-version-478853 kubelet[1655]: E0216 17:42:35.090202    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.742552  455078 out.go:239]   Feb 16 17:42:40 old-k8s-version-478853 kubelet[1655]: E0216 17:42:40.089571    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.742559  455078 out.go:239]   Feb 16 17:42:41 old-k8s-version-478853 kubelet[1655]: E0216 17:42:41.088872    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:42:42.742565  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:42.742570  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
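
	[editor's note] Each ten-second cycle above repeats the same probe: pgrep for a kube-apiserver process, then a docker ps query for each expected control-plane container. The equivalent manual checks, taken directly from the commands logged above:

	  # Look for a running apiserver process started by minikube
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # List apiserver containers, running or exited; empty output means none were ever created
	  docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
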
	I0216 17:42:52.743937  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:52.756372  455078 kubeadm.go:640] restartCluster took 4m18.22848465s
	W0216 17:42:52.756471  455078 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0216 17:42:52.756503  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:42:53.532102  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:42:53.543197  455078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:42:53.551917  455078 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:42:53.552015  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:42:53.560427  455078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:42:53.560470  455078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:42:53.726076  455078 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:42:53.785027  455078 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:42:53.785263  455078 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:42:53.865914  455078 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
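
	[editor's note] kubeadm now spends its wait-control-plane budget (up to 4m0s) before failing below. Since the kubelet errors point at image inspection rather than missing images, pre-pulling would not obviously help here, but kubeadm's own suggestion from the preflight output can be run ahead of init (hedged example; the version value is taken from this run):

	  # Pre-pull the control-plane images for the Kubernetes version used in this run
	  kubeadm config images pull --kubernetes-version v1.16.0
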
	I0216 17:46:54.897764  455078 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:46:54.897901  455078 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:46:54.900889  455078 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:46:54.900952  455078 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:46:54.901057  455078 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:46:54.901118  455078 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:46:54.901164  455078 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:46:54.901258  455078 kubeadm.go:322] OS: Linux
	I0216 17:46:54.901344  455078 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:46:54.901414  455078 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:46:54.901483  455078 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:46:54.901549  455078 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:46:54.901599  455078 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:46:54.901645  455078 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:46:54.901736  455078 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:46:54.901873  455078 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:46:54.902013  455078 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:46:54.902166  455078 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:46:54.902269  455078 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:46:54.902349  455078 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:46:54.902439  455078 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:46:54.905049  455078 out.go:204]   - Generating certificates and keys ...
	I0216 17:46:54.905136  455078 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:46:54.905209  455078 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:46:54.905290  455078 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:46:54.905360  455078 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:46:54.905435  455078 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:46:54.905485  455078 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:46:54.905549  455078 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:46:54.905608  455078 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:46:54.905668  455078 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:46:54.905730  455078 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:46:54.905789  455078 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:46:54.905857  455078 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:46:54.905899  455078 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:46:54.905946  455078 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:46:54.905996  455078 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:46:54.906054  455078 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:46:54.906113  455078 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:46:54.908366  455078 out.go:204]   - Booting up control plane ...
	I0216 17:46:54.908451  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:46:54.908521  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:46:54.908576  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:46:54.908644  455078 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:46:54.908802  455078 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:46:54.908855  455078 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:46:54.908861  455078 kubeadm.go:322] 
	I0216 17:46:54.908893  455078 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:46:54.908926  455078 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:46:54.908932  455078 kubeadm.go:322] 
	I0216 17:46:54.908967  455078 kubeadm.go:322] This error is likely caused by:
	I0216 17:46:54.908996  455078 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:46:54.909083  455078 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:46:54.909090  455078 kubeadm.go:322] 
	I0216 17:46:54.909170  455078 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:46:54.909199  455078 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:46:54.909225  455078 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:46:54.909231  455078 kubeadm.go:322] 
	I0216 17:46:54.909312  455078 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:46:54.909392  455078 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:46:54.909464  455078 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:46:54.909509  455078 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:46:54.909573  455078 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:46:54.909628  455078 kubeadm.go:322] 	- 'docker logs CONTAINERID'
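
	[editor's note] Following kubeadm's own troubleshooting advice above, the two suggested commands chain together like this (a sketch; CONTAINERID is a placeholder for an ID found by the first command):

	  # List all Kubernetes containers Docker knows about, excluding pause sandboxes
	  docker ps -a | grep kube | grep -v pause
	  # Then inspect the logs of any failing container found
	  docker logs CONTAINERID
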
	W0216 17:46:54.909766  455078 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 17:46:54.909815  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:46:55.653997  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:46:55.665110  455078 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:46:55.665171  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:46:55.673735  455078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
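
	[editor's note] The missing /etc/kubernetes/*.conf files are expected here: the `kubeadm reset` at 17:46:54 removed them, so minikube skips stale-config cleanup and re-runs init. Separately, the preflight output warns that the kubelet service is not enabled; the remedy it suggests (quoted from the warning, shown here as a hedged example) is:

	  # Enable the kubelet unit so it starts on boot, per the preflight warning
	  sudo systemctl enable kubelet.service
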
	I0216 17:46:55.673786  455078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:46:55.722375  455078 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:46:55.722432  455078 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:46:55.894761  455078 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:46:55.894856  455078 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:46:55.894909  455078 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:46:55.894973  455078 kubeadm.go:322] OS: Linux
	I0216 17:46:55.895037  455078 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:46:55.895101  455078 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:46:55.895159  455078 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:46:55.895220  455078 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:46:55.895285  455078 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:46:55.895341  455078 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:46:55.967714  455078 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:46:55.967839  455078 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:46:55.967958  455078 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:46:56.138307  455078 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:46:56.139389  455078 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:46:56.146473  455078 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:46:56.222590  455078 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:46:56.225987  455078 out.go:204]   - Generating certificates and keys ...
	I0216 17:46:56.226094  455078 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:46:56.226182  455078 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:46:56.226277  455078 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:46:56.226364  455078 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:46:56.226459  455078 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:46:56.226532  455078 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:46:56.226620  455078 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:46:56.226731  455078 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:46:56.226833  455078 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:46:56.226958  455078 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:46:56.227020  455078 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:46:56.227109  455078 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:46:56.394947  455078 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:46:56.547719  455078 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:46:56.909016  455078 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:46:57.118906  455078 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:46:57.119703  455078 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:46:57.121695  455078 out.go:204]   - Booting up control plane ...
	I0216 17:46:57.121837  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:46:57.126402  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:46:57.127880  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:46:57.128910  455078 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:46:57.132135  455078 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:47:37.132515  455078 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:50:57.133720  455078 kubeadm.go:322] 
	I0216 17:50:57.133814  455078 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:50:57.133878  455078 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:50:57.133889  455078 kubeadm.go:322] 
	I0216 17:50:57.133928  455078 kubeadm.go:322] This error is likely caused by:
	I0216 17:50:57.133973  455078 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:50:57.134138  455078 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:50:57.134168  455078 kubeadm.go:322] 
	I0216 17:50:57.134317  455078 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:50:57.134386  455078 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:50:57.134454  455078 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:50:57.134477  455078 kubeadm.go:322] 
	I0216 17:50:57.134600  455078 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:50:57.134682  455078 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:50:57.134772  455078 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:50:57.134854  455078 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:50:57.134948  455078 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:50:57.134989  455078 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:50:57.136987  455078 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:50:57.137100  455078 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:50:57.137301  455078 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:50:57.137405  455078 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:50:57.137479  455078 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:50:57.137562  455078 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:50:57.137603  455078 kubeadm.go:406] StartCluster complete in 12m22.638718493s
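
	[editor's note] The second init attempt fails identically after another four-minute wait, and StartCluster gives up after 12m22s in total. The SystemVerification warning is worth noting: kubeadm v1.16 predates Docker 25.0.3 by several years (latest validated version: 18.09). A minimal check of the runtime version actually running on the node:

	  # Report the Docker server version the node is running
	  docker version --format '{{.Server.Version}}'
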
	I0216 17:50:57.137690  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:50:57.155966  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.155994  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:50:57.156042  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:50:57.173312  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.173339  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:50:57.173395  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:50:57.190861  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.190885  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:50:57.190939  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:50:57.208223  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.208245  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:50:57.208292  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:50:57.224808  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.224835  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:50:57.224887  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:50:57.242004  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.242026  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:50:57.242066  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:50:57.258500  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.258522  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:50:57.258562  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:50:57.275390  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.275415  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:50:57.275427  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:50:57.275443  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:50:57.336885  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:50:57.336911  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:50:57.336929  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:50:57.354268  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:50:57.354298  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:50:57.388996  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:50:57.389022  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:50:57.410914  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:50:57.418232  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:50:57.420274  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:50:57.423841  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:43 old-k8s-version-478853 kubelet[11238]: E0216 17:50:43.867344   11238 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:50:57.433982  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:49 old-k8s-version-478853 kubelet[11238]: E0216 17:50:49.865840   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:50:57.437556  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:51 old-k8s-version-478853 kubelet[11238]: E0216 17:50:51.865653   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:50:57.446171  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:56 old-k8s-version-478853 kubelet[11238]: E0216 17:50:56.867671   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:50:57.446448  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:56 old-k8s-version-478853 kubelet[11238]: E0216 17:50:56.868767   11238 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
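	Every ImageInspectError above reports that the image's Id or size "is not set", which points at control-plane images that were never pulled into (or are broken in) the node's Docker store. A minimal check, sketched with the image names taken from this log and meant to be run inside the minikube node (e.g. via minikube ssh), not as part of the test itself:
	
	    for img in k8s.gcr.io/kube-apiserver:v1.16.0 \
	               k8s.gcr.io/kube-controller-manager:v1.16.0 \
	               k8s.gcr.io/kube-scheduler:v1.16.0 \
	               k8s.gcr.io/etcd:3.3.15-0; do
	      # An empty Id or zero Size here would reproduce the kubelet's ImageInspectError.
	      docker image inspect "$img" --format '{{.Id}} {{.Size}}' || echo "missing: $img"
	    done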
	I0216 17:50:57.447246  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:50:57.447271  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0216 17:50:57.472300  455078 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:50:57.472350  455078 out.go:239] * 
	W0216 17:50:57.472421  455078 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:50:57.472446  455078 out.go:239] * 
	W0216 17:50:57.473265  455078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:50:57.475359  455078 out.go:177] X Problems detected in kubelet:
	I0216 17:50:57.477187  455078 out.go:177]   Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:50:57.478538  455078 out.go:177]   Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:50:57.479997  455078 out.go:177]   Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:50:57.482565  455078 out.go:177] 
	W0216 17:50:57.483906  455078 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:50:57.483958  455078 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:50:57.483983  455078 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
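	For reference, the two remediations this suggestion points at look roughly as follows. A sketch only, not part of this run: the profile name is taken from this log, and the daemon.json edit assumes Docker's default config path (merge with any existing options, such as the log-size limit visible in the inspect output below, rather than overwriting):
	
	    # Pass the kubelet cgroup-driver override on the next start, as suggested:
	    out/minikube-linux-amd64 start -p old-k8s-version-478853 \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	    # Or switch Docker itself from cgroupfs to the recommended systemd driver:
	    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
	    sudo systemctl restart docker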
	I0216 17:50:57.485600  455078 out.go:177] 

                                                
                                                
** /stderr **
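The kubeadm advice repeated throughout the output above reduces to two docker commands. A hedged one-liner variant, using the k8s_ name prefix that minikube's own filters use earlier in this log (in this particular run it would print nothing, matching the "0 containers" lines):

	for c in $(docker ps -aq --filter name=k8s_); do
	  # Dump the recent logs of every Kubernetes container the runtime started.
	  echo "== $c =="
	  docker logs --tail 20 "$c"
	done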
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-478853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-478853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-478853:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde",
	        "Created": "2024-02-16T17:28:05.344964673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455353,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:38:21.761755666Z",
	            "FinishedAt": "2024-02-16T17:38:20.210098294Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hostname",
	        "HostsPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hosts",
	        "LogPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde-json.log",
	        "Name": "/old-k8s-version-478853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-478853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-478853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-478853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-478853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-478853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "199c16a2a4e5610e66ab3ac8041b86ba652305b9a0affd9b2a79a513df594615",
	            "SandboxKey": "/var/run/docker/netns/199c16a2a4e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-478853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74b66ed59b2b",
	                        "old-k8s-version-478853"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "166a9b0cbcbad81945e5ddf7b3ae3a6fed94ef48dba3d7d6ceb648c91593d0fb",
	                    "EndpointID": "cb8d24629aaf63d27bbb12983ffcbd66ccc33e142bfce98dd2d283368110e8a2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-478853",
	                        "74b66ed59b2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
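When the full dump above is more than needed, the same post-mortem facts can be pulled with a Go template. A sketch using only fields that appear in the output above (container and network names taken from this run):

	docker inspect old-k8s-version-478853 --format \
	  'status={{.State.Status}} started={{.State.StartedAt}} ip={{(index .NetworkSettings.Networks "old-k8s-version-478853").IPAddress}}'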
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 2 (280.659595ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-478853 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p no-preload-408847                                   | no-preload-408847            | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	| delete  | -p no-preload-408847                                   | no-preload-408847            | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	| start   | -p newest-cni-398474 --memory=2200 --alsologtostderr   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-398474             | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-398474                  | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-398474 --memory=2200 --alsologtostderr   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:38 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-398474 image list                           | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	| delete  | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	| stop    | -p old-k8s-version-478853                              | old-k8s-version-478853       | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-478853             | old-k8s-version-478853       | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-478853                              | old-k8s-version-478853       | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-162802 image list                          | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	| delete  | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	| image   | default-k8s-diff-port-816748                           | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 17:38:21
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
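Every entry below follows that klog header. A minimal Go sketch of parsing it, useful when post-processing these logs; the regexp and field names are ours, not minikube's:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header format stated above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0216 17:38:21.303089  455078 out.go:291] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}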
	I0216 17:38:21.303089  455078 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:38:21.303345  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:38:21.303354  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:38:21.303359  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:38:21.303563  455078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:38:21.304200  455078 out.go:298] Setting JSON to false
	I0216 17:38:21.305432  455078 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4848,"bootTime":1708100254,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 17:38:21.305506  455078 start.go:139] virtualization: kvm guest
	I0216 17:38:21.307760  455078 out.go:177] * [old-k8s-version-478853] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 17:38:21.310010  455078 notify.go:220] Checking for updates...
	I0216 17:38:21.310012  455078 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:38:21.311432  455078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:38:21.312916  455078 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:38:21.314294  455078 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 17:38:21.315598  455078 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 17:38:21.316976  455078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:38:21.318997  455078 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:38:21.321025  455078 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0216 17:38:21.322407  455078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:38:21.345628  455078 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:38:21.345735  455078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:38:21.400126  455078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:38:21.390220676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:38:21.400280  455078 docker.go:295] overlay module found
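The overlay check above gates the docker driver on overlayfs support. Something equivalent can be approximated by scanning /proc/modules, with /proc/filesystems as a fallback for a built-in overlayfs; a sketch, not minikube's exact code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasOverlay reports whether overlayfs is available. A built-in overlayfs
// does not appear in /proc/modules, so /proc/filesystems is checked too.
func hasOverlay() bool {
	for _, path := range []string{"/proc/modules", "/proc/filesystems"} {
		f, err := os.Open(path)
		if err != nil {
			continue
		}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.Contains(sc.Text(), "overlay") {
				f.Close()
				return true
			}
		}
		f.Close()
	}
	return false
}

func main() { fmt.Println("overlay module found:", hasOverlay()) }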
	I0216 17:38:21.402314  455078 out.go:177] * Using the docker driver based on existing profile
	I0216 17:38:21.403808  455078 start.go:299] selected driver: docker
	I0216 17:38:21.403824  455078 start.go:903] validating driver "docker" against &{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:21.403921  455078 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:38:21.404778  455078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:38:21.460365  455078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:38:21.451261069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:38:21.460674  455078 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:38:21.460728  455078 cni.go:84] Creating CNI manager for ""
	I0216 17:38:21.460750  455078 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
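The two cni.go lines record why no CNI is installed: with the docker runtime (dockershim) on a single node, the default bridge networking suffices. A rough sketch of that decision rule; the real logic lives in minikube's pkg/minikube/cni and is more involved, so treat the names and defaults here as illustrative:

package main

import "fmt"

// chooseCNI mirrors the decision above in spirit only: docker runtime on a
// single node gets no separate CNI; other shapes get a default plugin.
func chooseCNI(containerRuntime string, nodes int, requested string) string {
	if requested != "" {
		return requested // user override wins
	}
	if containerRuntime == "docker" && nodes == 1 {
		return "" // CNI unnecessary in this configuration
	}
	return "kindnet" // assumed default for multinode/non-docker setups
}

func main() {
	fmt.Printf("cni=%q\n", chooseCNI("docker", 1, ""))
}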
	I0216 17:38:21.460764  455078 start_flags.go:323] config:
	{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:21.464108  455078 out.go:177] * Starting control plane node old-k8s-version-478853 in cluster old-k8s-version-478853
	I0216 17:38:21.465746  455078 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:38:21.467261  455078 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:38:21.468714  455078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:38:21.468746  455078 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:38:21.468770  455078 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 17:38:21.468818  455078 cache.go:56] Caching tarball of preloaded images
	I0216 17:38:21.468909  455078 preload.go:174] Found /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 17:38:21.468919  455078 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:38:21.469017  455078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:38:21.486258  455078 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:38:21.486284  455078 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:38:21.486302  455078 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:38:21.486342  455078 start.go:365] acquiring machines lock for old-k8s-version-478853: {Name:mkde5e52743909de9e75497b3ed0dd80f14fc0ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:38:21.486408  455078 start.go:369] acquired machines lock for "old-k8s-version-478853" in 40.03µs
	I0216 17:38:21.486432  455078 start.go:96] Skipping create...Using existing machine configuration
	I0216 17:38:21.486439  455078 fix.go:54] fixHost starting: 
	I0216 17:38:21.486680  455078 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:38:21.504783  455078 fix.go:102] recreateIfNeeded on old-k8s-version-478853: state=Stopped err=<nil>
	W0216 17:38:21.504825  455078 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 17:38:21.506811  455078 out.go:177] * Restarting existing docker container for "old-k8s-version-478853" ...
	I0216 17:38:18.761435  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:21.246854  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:21.140505  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:23.640145  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:25.640932  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:21.508568  455078 cli_runner.go:164] Run: docker start old-k8s-version-478853
	I0216 17:38:21.769480  455078 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:38:21.789204  455078 kic.go:430] container "old-k8s-version-478853" state is running.
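The restart path is docker start followed by polling the container state until it reports running. A condensed sketch of that loop using the same inspect format string; the timeout and poll interval are our choices, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format={{.State.Status}}`
// until the container reports "running" or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	_ = exec.Command("docker", "start", "old-k8s-version-478853").Run()
	if err := waitRunning("old-k8s-version-478853", time.Minute); err != nil {
		fmt.Println(err)
	}
}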
	I0216 17:38:21.789622  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:21.808063  455078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:38:21.808370  455078 machine.go:88] provisioning docker machine ...
	I0216 17:38:21.808408  455078 ubuntu.go:169] provisioning hostname "old-k8s-version-478853"
	I0216 17:38:21.808455  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:21.826185  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:21.826686  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:21.826710  455078 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-478853 && echo "old-k8s-version-478853" | sudo tee /etc/hostname
	I0216 17:38:21.827431  455078 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44460->127.0.0.1:33102: read: connection reset by peer
	I0216 17:38:24.971815  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-478853
	
	I0216 17:38:24.971897  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:24.989390  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:24.989714  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:24.989739  455078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-478853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-478853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-478853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:38:25.120712  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:38:25.120747  455078 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 17:38:25.120784  455078 ubuntu.go:177] setting up certificates
	I0216 17:38:25.120795  455078 provision.go:83] configureAuth start
	I0216 17:38:25.120844  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:25.140311  455078 provision.go:138] copyHostCerts
	I0216 17:38:25.140392  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 17:38:25.140404  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 17:38:25.140473  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 17:38:25.140575  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 17:38:25.140585  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 17:38:25.140611  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 17:38:25.140678  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 17:38:25.140685  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 17:38:25.140706  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 17:38:25.140759  455078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-478853 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-478853]
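provision.go:112 issues a server certificate signed by the minikube CA, with the SAN list shown in the log line above. A self-contained sketch using crypto/x509; unlike minikube, which loads ca.pem/ca-key.pem from .minikube/certs, this generates a throwaway CA so the example runs on its own (error handling trimmed):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, for illustration only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-478853"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-478853"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}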
	I0216 17:38:25.293113  455078 provision.go:172] copyRemoteCerts
	I0216 17:38:25.293171  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:38:25.293215  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.311679  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:25.405147  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 17:38:25.429153  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 17:38:25.454627  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 17:38:25.477710  455078 provision.go:86] duration metric: configureAuth took 356.904526ms
	I0216 17:38:25.477736  455078 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:38:25.477903  455078 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:38:25.477947  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.495763  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.496095  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.496108  455078 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:38:25.628939  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:38:25.628966  455078 ubuntu.go:71] root file system type: overlay
	I0216 17:38:25.629075  455078 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:38:25.629128  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.647033  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.647356  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.647419  455078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:38:25.796668  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:38:25.796764  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.815271  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.815583  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.815601  455078 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:38:25.957528  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
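The one-liner above is the idempotence trick for the unit file: diff the staged docker.service.new against the installed unit, and only when they differ move it into place and daemon-reload/enable/restart docker. The same shape in Go (paths from the log; must run as root):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installUnit mirrors the `diff || { mv; daemon-reload; enable; restart }`
// pattern: docker is only reloaded and restarted when the freshly rendered
// unit actually differs, which keeps repeated starts idempotent.
func installUnit(staged, installed string) error {
	oldUnit, _ := os.ReadFile(installed) // a missing file reads as empty
	newUnit, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(oldUnit, newUnit) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.Rename(staged, installed); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = installUnit("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service")
}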
	I0216 17:38:25.957560  455078 machine.go:91] provisioned docker machine in 4.149165092s
	I0216 17:38:25.957575  455078 start.go:300] post-start starting for "old-k8s-version-478853" (driver="docker")
	I0216 17:38:25.957589  455078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:38:25.957706  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:38:25.957761  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.976195  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.069365  455078 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:38:26.072831  455078 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:38:26.072871  455078 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:38:26.072884  455078 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:38:26.072893  455078 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:38:26.072906  455078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 17:38:26.072974  455078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 17:38:26.073063  455078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 17:38:26.073181  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:38:26.081215  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:38:26.103318  455078 start.go:303] post-start completed in 145.726596ms
	I0216 17:38:26.103402  455078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:38:26.103446  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.121271  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.213029  455078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:38:26.217252  455078 fix.go:56] fixHost completed within 4.730808663s
	I0216 17:38:26.217282  455078 start.go:83] releasing machines lock for "old-k8s-version-478853", held for 4.730859928s
	I0216 17:38:26.217359  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:26.236067  455078 ssh_runner.go:195] Run: cat /version.json
	I0216 17:38:26.236096  455078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:38:26.236126  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.236181  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.255208  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.256650  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.432006  455078 ssh_runner.go:195] Run: systemctl --version
	I0216 17:38:26.436397  455078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 17:38:26.440753  455078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 17:38:26.440819  455078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:38:26.449648  455078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:38:26.458023  455078 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0216 17:38:26.458059  455078 start.go:475] detecting cgroup driver to use...
	I0216 17:38:26.458090  455078 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:38:26.458223  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:38:26.474175  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:38:26.484094  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:38:26.493935  455078 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:38:26.494002  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:38:26.503403  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:38:26.512684  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:38:26.521909  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:38:26.531787  455078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:38:26.540705  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:38:26.550084  455078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:38:26.558059  455078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:38:26.565815  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:26.641416  455078 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 17:38:26.728849  455078 start.go:475] detecting cgroup driver to use...
	I0216 17:38:26.728911  455078 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:38:26.728990  455078 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:38:26.742735  455078 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:38:26.742813  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:38:26.759375  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:38:26.799127  455078 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:38:26.803185  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:38:26.812600  455078 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:38:26.833140  455078 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:38:26.932984  455078 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:38:27.033484  455078 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:38:27.033629  455078 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:38:27.051185  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:27.130916  455078 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:38:27.399678  455078 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:38:27.425421  455078 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
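docker.go:574 pins the detected cgroup driver by writing a small /etc/docker/daemon.json (130 bytes here) before restarting docker, then reads the server version back to confirm the daemon is up. A sketch of composing such a file; the exact key set minikube writes is an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig is a minimal /etc/docker/daemon.json pinning the cgroup
// driver detected earlier ("cgroupfs"). The field set is illustrative.
type daemonConfig struct {
	ExecOpts      []string `json:"exec-opts"`
	LogDriver     string   `json:"log-driver"`
	StorageDriver string   `json:"storage-driver"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:      []string{"native.cgroupdriver=cgroupfs"},
		LogDriver:     "json-file",
		StorageDriver: "overlay2",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}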
	I0216 17:38:23.747120  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:25.747228  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:27.749489  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:28.141295  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:30.640768  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:27.452311  455078 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:38:27.452430  455078 cli_runner.go:164] Run: docker network inspect old-k8s-version-478853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:38:27.470021  455078 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 17:38:27.473738  455078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:38:27.498087  455078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:38:27.498175  455078 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:38:27.517834  455078 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:38:27.517864  455078 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:38:27.517929  455078 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:38:27.526852  455078 ssh_runner.go:195] Run: which lz4
	I0216 17:38:27.530297  455078 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0216 17:38:27.533688  455078 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:38:27.533725  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 17:38:28.338789  455078 docker.go:649] Took 0.808536 seconds to copy over tarball
	I0216 17:38:28.338870  455078 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:38:30.411788  455078 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072893303s)
	I0216 17:38:30.411815  455078 ssh_runner.go:146] rm: /preloaded.tar.lz4
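The sequence above stages the preload: stat /preloaded.tar.lz4 on the node, scp the ~370 MB cache over when the stat fails, untar it with lz4 into /var preserving security xattrs, then remove the tarball. A condensed local sketch of the extract step, assuming lz4 is installed and using the same tar invocation and path as the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload checks for the staged tarball, untars it into /var
// preserving capability xattrs, and removes it afterwards.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload not staged: %w", err)
	}
	untar := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	untar.Stdout, untar.Stderr = os.Stdout, os.Stderr
	if err := untar.Run(); err != nil {
		return err
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}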
	I0216 17:38:30.479175  455078 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:38:30.487733  455078 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:38:30.505100  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:30.582595  455078 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:38:30.247143  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:32.747816  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:33.141181  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:35.639892  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:33.116313  455078 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.533681626s)
	I0216 17:38:33.116382  455078 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:38:33.135813  455078 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:38:33.135845  455078 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:38:33.135858  455078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:38:33.137162  455078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.137160  455078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.137160  455078 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.137223  455078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.137354  455078 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.137392  455078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.137429  455078 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:38:33.137443  455078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.138311  455078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.138333  455078 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.138313  455078 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:38:33.138376  455078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.138385  455078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.138313  455078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.138400  455078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.138433  455078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.285042  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 17:38:33.303267  455078 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 17:38:33.303312  455078 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:38:33.303348  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:38:33.315066  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.321725  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 17:38:33.323279  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.334757  455078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 17:38:33.334805  455078 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.334852  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.343699  455078 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 17:38:33.343747  455078 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.343793  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.352683  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.354280  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:38:33.362703  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:38:33.371065  455078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 17:38:33.371116  455078 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.371157  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.375587  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.376027  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.388362  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.393888  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:38:33.398036  455078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 17:38:33.398083  455078 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.398130  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.398631  455078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 17:38:33.398662  455078 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.398705  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.409280  455078 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 17:38:33.409328  455078 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.409390  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.417932  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:38:33.419058  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:38:33.429478  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 17:38:33.927751  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.946761  455078 cache_images.go:92] LoadImages completed in 810.887895ms
	W0216 17:38:33.946835  455078 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
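The LoadImages pass above works per image: inspect the tag, compare the returned ID against the expected hash, and on a mismatch rmi the stale k8s.gcr.io-era tag and load the cached tarball. Here it ends with the X warning because the pause_3.1 cache file was never downloaded; the warning is non-fatal and the start continues below. A sketch of that ensure step with the docker CLI standing in for minikube's internals (hash and cache path taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage checks that `image` resolves to wantID in the runtime;
// otherwise it removes the stale tag and loads the cached tarball.
func ensureImage(image, wantID, cachePath string) error {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil {
		got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
		if got == wantID {
			return nil // tag already points at the expected image
		}
	}
	// Stale or missing: drop the tag (best effort), then load from cache.
	_ = exec.Command("docker", "rmi", image).Run()
	return exec.Command("docker", "load", "-i", cachePath).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
		"/home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1")
	if err != nil {
		fmt.Println("X Unable to load cached image:", err)
	}
}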
	I0216 17:38:33.946924  455078 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:38:33.998980  455078 cni.go:84] Creating CNI manager for ""
	I0216 17:38:33.999011  455078 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:38:33.999032  455078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:38:33.999057  455078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-478853 NodeName:old-k8s-version-478853 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:38:33.999219  455078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-478853"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-478853
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
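The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as /var/tmp/minikube/kubeadm.yaml.new in the scp a few lines below, and later diffed against the live copy before being installed (see the diff run at 17:38:34.540). The same comparison can be made by hand on the node, using the paths from this log:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new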
	
	I0216 17:38:33.999336  455078 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-478853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
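The empty ExecStart= in the [Service] section above is the standard systemd drop-in idiom: the first line clears the ExecStart inherited from the base kubelet.service, and the second line sets the override command. This unit text is what the 348-byte scp below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; on the node, the merged result can be inspected with:

    systemctl cat kubelet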
	I0216 17:38:33.999401  455078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:38:34.008330  455078 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:38:34.008396  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:38:34.017118  455078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 17:38:34.036229  455078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:38:34.052983  455078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 17:38:34.069858  455078 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:38:34.073399  455078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
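The brace-group command above is the usual trick for rewriting a root-owned file from an unprivileged shell: the filter-and-append happens into a temp file (redirection runs in the caller's shell, so a plain sudo ... > /etc/hosts would fail), and only the final cp is privileged. Spelled out more readably:

    # drop any stale control-plane entry, append the current one, then install it
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts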
	I0216 17:38:34.084821  455078 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853 for IP: 192.168.76.2
	I0216 17:38:34.084858  455078 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:38:34.085003  455078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 17:38:34.085065  455078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 17:38:34.085164  455078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.key
	I0216 17:38:34.085237  455078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25
	I0216 17:38:34.085304  455078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key
	I0216 17:38:34.085439  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 17:38:34.085482  455078 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 17:38:34.085498  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 17:38:34.085534  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 17:38:34.085568  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:38:34.085605  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 17:38:34.085675  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:38:34.086382  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:38:34.110629  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 17:38:34.134912  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:38:34.158975  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 17:38:34.182778  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:38:34.206586  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 17:38:34.230134  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:38:34.254430  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 17:38:34.277612  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 17:38:34.300924  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 17:38:34.323994  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:38:34.347005  455078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:38:34.363860  455078 ssh_runner.go:195] Run: openssl version
	I0216 17:38:34.369225  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:38:34.378947  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.382670  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.382744  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.389395  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:38:34.398260  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 17:38:34.407649  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.411256  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.411309  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.417851  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
	I0216 17:38:34.426535  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 17:38:34.436025  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.439431  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.439491  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.445718  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
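The .0 link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames, which is why each ln step is preceded by an openssl x509 -hash run: that command prints the hash OpenSSL uses to look a CA up in /etc/ssl/certs. Reproducing one link by hand:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"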
	I0216 17:38:34.455048  455078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:38:34.458881  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 17:38:34.465622  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 17:38:34.472122  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 17:38:34.478657  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 17:38:34.485187  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 17:38:34.491630  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
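Each -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero otherwise, making these cheap pass/fail expiry probes:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expires within 24h"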
	I0216 17:38:34.498893  455078 kubeadm.go:404] StartCluster: {Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:34.499126  455078 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:38:34.518382  455078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:38:34.527854  455078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 17:38:34.527878  455078 kubeadm.go:636] restartCluster start
	I0216 17:38:34.527928  455078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 17:38:34.536194  455078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:34.537015  455078 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:38:34.537514  455078 kubeconfig.go:146] "old-k8s-version-478853" context is missing from /home/jenkins/minikube-integration/17936-6821/kubeconfig - will repair!
	I0216 17:38:34.538343  455078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/kubeconfig: {Name:mkdc2ed683d72ff0e162ea619463de7edb9c0858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:38:34.540022  455078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 17:38:34.548446  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:34.548492  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:34.558247  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:35.049347  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:35.049468  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:35.059915  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:35.549359  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:35.549453  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:35.559843  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:36.049307  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:36.049396  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:36.059322  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:35.246221  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:37.246568  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:37.641066  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:40.140454  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:36.549105  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:36.549213  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:36.559873  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:37.049327  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:37.049438  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:37.060186  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:37.548692  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:37.548776  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:37.559318  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:38.048848  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:38.048932  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:38.059825  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:38.549312  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:38.549402  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:38.559567  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:39.049162  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:39.049259  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:39.060000  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:39.549306  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:39.549387  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:39.559839  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:40.049293  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:40.049368  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:40.059831  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:40.549417  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:40.549497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:40.559373  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:41.048862  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:41.048945  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:41.059288  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:39.247561  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:41.748493  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:42.140801  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:44.640033  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:41.549382  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:41.549484  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:41.559314  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:42.048976  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:42.049123  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:42.059008  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:42.548578  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:42.548667  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:42.558842  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:43.049308  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:43.049406  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:43.059857  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:43.549518  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:43.549600  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:43.559742  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:44.049320  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:44.049427  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:44.059859  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:44.548752  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:44.548839  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:44.560016  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
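The repeated "Checking apiserver status" entries above are a fixed-interval poll: pgrep roughly every 500ms until either a process appears or the Go context's deadline expires (the outcome recorded just below). A rough shell equivalent of the pattern, with an illustrative 10-second deadline rather than minikube's actual timeout:

    # hypothetical sketch of the poll-until-deadline loop, not minikube's code
    deadline=$((SECONDS + 10))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        (( SECONDS >= deadline )) && { echo "context deadline exceeded"; break; }
        sleep 0.5
    done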
	I0216 17:38:44.560053  455078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0216 17:38:44.560062  455078 kubeadm.go:1135] stopping kube-system containers ...
	I0216 17:38:44.560127  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:38:44.578770  455078 docker.go:483] Stopping containers: [075b0ec6a484 d2ce0b886430 928d392994b3 5e7370fcf7f8]
	I0216 17:38:44.578834  455078 ssh_runner.go:195] Run: docker stop 075b0ec6a484 d2ce0b886430 928d392994b3 5e7370fcf7f8
	I0216 17:38:44.596955  455078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 17:38:44.609545  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:38:44.618238  455078 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 16 17:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 16 17:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 16 17:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 16 17:32 /etc/kubernetes/scheduler.conf
	
	I0216 17:38:44.618338  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 17:38:44.626677  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 17:38:44.634782  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 17:38:44.643301  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 17:38:44.651439  455078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:38:44.659643  455078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 17:38:44.659668  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:44.715075  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.624969  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.844221  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.921661  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
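For the restart path, minikube re-runs individual kubeadm init phases rather than a full kubeadm init; the exact sequence from the five Run lines above, expressed as a loop runnable as-is on the node:

    # $phase is intentionally unquoted so "certs all" splits into two arguments
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done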
	I0216 17:38:46.017075  455078 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:38:46.017183  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:44.246867  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:46.247223  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:47.140687  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:49.640734  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:46.517829  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:47.018038  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:47.518055  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:48.018190  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:48.517516  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:49.017903  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:49.517300  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:50.017289  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:50.517571  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:51.017570  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:48.247348  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:50.747444  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:52.750448  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:52.140329  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:54.641789  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:51.517363  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:52.017595  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:52.517311  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:53.017396  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:53.517392  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:54.017334  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:54.517678  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:55.017257  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:55.517766  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:56.018102  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:55.247481  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:57.747095  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:57.140707  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:59.640217  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:56.517703  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:57.017370  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:57.518275  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:58.017728  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:58.517273  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:59.017508  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:59.517232  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:00.017311  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:00.518159  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:01.017950  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:59.747967  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:02.246918  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:01.640535  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:04.140454  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:01.517978  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:02.017445  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:02.518044  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:03.017623  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:03.517519  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:04.018161  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:04.517338  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:05.018128  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:05.518224  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:06.017573  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:04.747285  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:06.748002  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:06.140588  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:08.640075  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:10.640834  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:06.517756  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:07.017566  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:07.518227  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:08.017309  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:08.517919  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:09.017261  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:09.517958  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:10.018104  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:10.517630  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:11.017722  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:09.246644  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:11.247325  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:13.140690  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:15.639645  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:11.517385  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:12.018082  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:12.518218  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:13.017548  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:13.517305  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:14.017745  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:14.517334  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:15.018048  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:15.517744  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:16.018296  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:13.747391  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:15.747767  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:17.747895  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:17.640336  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:19.641039  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:16.517970  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:17.017324  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:17.517497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:18.017541  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:18.517634  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:19.017283  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:19.518252  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:20.018182  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:20.517728  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:21.017730  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:19.749099  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:22.247204  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:22.140431  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:24.140986  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:21.517816  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:22.017751  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:22.517782  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:23.018273  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:23.517621  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:24.017984  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:24.517954  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:25.018276  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:25.517286  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:26.017373  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:24.747551  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:26.747774  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:26.639947  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:28.640616  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:30.640740  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:26.517418  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:27.017640  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:27.517287  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:28.017677  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:28.517756  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:29.017227  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:29.517587  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:30.017969  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:30.518374  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:31.017306  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:29.246627  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:31.747429  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:33.140469  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:35.640295  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:31.517715  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:32.017728  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:32.517510  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:33.018287  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:33.517848  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:34.018088  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:34.518190  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:35.017886  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:35.517921  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:36.017601  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:33.748340  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:36.246559  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:38.141091  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:40.642937  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:36.517708  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:37.017256  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:37.518107  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:38.018257  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:38.517396  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:39.018308  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:39.517977  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:40.017391  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:40.517676  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:41.018082  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:38.741460  421205 pod_ready.go:81] duration metric: took 4m0.000603771s waiting for pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace to be "Ready" ...
	E0216 17:39:38.741515  421205 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0216 17:39:38.741533  421205 pod_ready.go:38] duration metric: took 4m12.045748032s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 17:39:38.741559  421205 kubeadm.go:640] restartCluster took 4m28.365798554s
	W0216 17:39:38.741619  421205 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0216 17:39:38.741647  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0216 17:39:43.140804  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:45.640700  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:45.437451  421205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.695785181s)
	I0216 17:39:45.437509  421205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:39:45.449061  421205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:39:45.457885  421205 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:39:45.457936  421205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:39:45.466012  421205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:39:45.466056  421205 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:39:45.508738  421205 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0216 17:39:45.508791  421205 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:39:45.558205  421205 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:39:45.558302  421205 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:39:45.558347  421205 kubeadm.go:322] OS: Linux
	I0216 17:39:45.558428  421205 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:39:45.558485  421205 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:39:45.558553  421205 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:39:45.558668  421205 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:39:45.558732  421205 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:39:45.558772  421205 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:39:45.558807  421205 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0216 17:39:45.558847  421205 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0216 17:39:45.558884  421205 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0216 17:39:45.627418  421205 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:39:45.627548  421205 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:39:45.627688  421205 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:39:45.912474  421205 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:39:41.517622  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:42.018155  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:42.517827  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:43.017315  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:43.518231  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:44.017682  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:44.518286  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:45.017388  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:45.517539  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:46.017624  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:39:46.037272  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.037295  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:39:46.037341  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:39:46.055115  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.055155  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:39:46.055211  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:39:46.072423  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.072450  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:39:46.072507  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:39:46.090301  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.090332  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:39:46.090378  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:39:46.107880  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.107903  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:39:46.107956  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:39:46.125772  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.125798  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:39:46.125854  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:39:46.144677  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.144701  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:39:46.144756  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:39:46.162329  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.162352  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
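These name filters rely on the dockershim naming convention: kubelet creates containers named k8s_CONTAINER_POD_NAMESPACE_UID_ATTEMPT, so --filter=name=k8s_kube-apiserver matches only the apiserver container. Listing everything kubelet has created, for comparison:

    docker ps -a --filter=name='k8s_' --format '{{.ID}}\t{{.Names}}'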
	I0216 17:39:46.162364  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:39:46.162380  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:39:46.185113  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:24 old-k8s-version-478853 kubelet[1655]: E0216 17:39:24.090711    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.185260  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:24 old-k8s-version-478853 kubelet[1655]: E0216 17:39:24.091853    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.187251  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:25 old-k8s-version-478853 kubelet[1655]: E0216 17:39:25.090502    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.194562  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:29 old-k8s-version-478853 kubelet[1655]: E0216 17:39:29.089933    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:46.207697  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.211063  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.219621  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.220204  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
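
Every kubelet problem above is the same failure repeating: on this old-k8s-version node the kubelet cannot start the v1.16.0 control-plane static pods because Docker's inspection of the cached k8s.gcr.io images returns no Id or size, so each pod sync is skipped. Assuming the profile name from this log, the inspection failure should be reproducible by hand with something like:

    minikube -p old-k8s-version-478853 ssh -- docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-scheduler:v1.16.0

If the stored image metadata is genuinely corrupt, re-pulling the image would be the usual first fix; this is a diagnostic sketch, not something the test harness runs.
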
	I0216 17:39:46.231233  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:39:46.231271  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:39:46.254556  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:39:46.254587  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:39:45.914573  421205 out.go:204]   - Generating certificates and keys ...
	I0216 17:39:45.914675  421205 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:39:45.914799  421205 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:39:45.914914  421205 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:39:45.915001  421205 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:39:45.915089  421205 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:39:45.915541  421205 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:39:45.916033  421205 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:39:45.916419  421205 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:39:45.916848  421205 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:39:45.917282  421205 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:39:45.917754  421205 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:39:45.917840  421205 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:39:46.148582  421205 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:39:46.292877  421205 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:39:46.367973  421205 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:39:46.626595  421205 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:39:46.627016  421205 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:39:46.629773  421205 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:39:46.631711  421205 out.go:204]   - Booting up control plane ...
	I0216 17:39:46.631800  421205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:39:46.631863  421205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:39:46.632578  421205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:39:46.646321  421205 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:39:46.647004  421205 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:39:46.647046  421205 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 17:39:46.742531  421205 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:39:48.140674  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:50.141346  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	W0216 17:39:46.318337  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
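
The describe-nodes failure above is a downstream symptom rather than a separate problem: with no kube-apiserver container running, nothing is listening on localhost:8443, so kubectl's connection is refused. A quick way to confirm the apiserver container's absence, reusing the same name filter the log collector itself uses below, would be:

    minikube -p old-k8s-version-478853 ssh -- docker ps -a --filter name=k8s_kube-apiserver
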
	I0216 17:39:46.318446  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:39:46.318467  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:39:46.335929  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:39:46.335962  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
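
The container-status command above is a portability idiom: the backtick substitution resolves crictl to its full path when `which` finds it (falling back to the literal name otherwise), and if the whole crictl invocation still fails, plain `docker ps -a` is used instead. Spelled out, it is equivalent to:

    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a
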
	I0216 17:39:46.372855  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:46.372884  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:39:46.372951  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:39:46.372966  455078 out.go:239]   Feb 16 17:39:29 old-k8s-version-478853 kubelet[1655]: E0216 17:39:29.089933    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:46.372982  455078 out.go:239]   Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.372999  455078 out.go:239]   Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.373011  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.373032  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:39:46.373043  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:46.373054  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:39:52.244564  421205 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502426 seconds
	I0216 17:39:52.244744  421205 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0216 17:39:52.257745  421205 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0216 17:39:52.780917  421205 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0216 17:39:52.781167  421205 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-816748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0216 17:39:53.290300  421205 kubeadm.go:322] [bootstrap-token] Using token: b545ud.qoxywc1rux2naq15
	I0216 17:39:53.291755  421205 out.go:204]   - Configuring RBAC rules ...
	I0216 17:39:53.291900  421205 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0216 17:39:53.296340  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0216 17:39:53.305516  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0216 17:39:53.308824  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0216 17:39:53.311990  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0216 17:39:53.315096  421205 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0216 17:39:53.326643  421205 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0216 17:39:53.516995  421205 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0216 17:39:53.702313  421205 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0216 17:39:53.703508  421205 kubeadm.go:322] 
	I0216 17:39:53.703621  421205 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0216 17:39:53.703643  421205 kubeadm.go:322] 
	I0216 17:39:53.703738  421205 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0216 17:39:53.703749  421205 kubeadm.go:322] 
	I0216 17:39:53.703791  421205 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0216 17:39:53.703859  421205 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0216 17:39:53.703917  421205 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0216 17:39:53.703923  421205 kubeadm.go:322] 
	I0216 17:39:53.703990  421205 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0216 17:39:53.703997  421205 kubeadm.go:322] 
	I0216 17:39:53.704048  421205 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0216 17:39:53.704054  421205 kubeadm.go:322] 
	I0216 17:39:53.704115  421205 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0216 17:39:53.704243  421205 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0216 17:39:53.704316  421205 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0216 17:39:53.704324  421205 kubeadm.go:322] 
	I0216 17:39:53.704429  421205 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0216 17:39:53.704536  421205 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0216 17:39:53.704543  421205 kubeadm.go:322] 
	I0216 17:39:53.704641  421205 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token b545ud.qoxywc1rux2naq15 \
	I0216 17:39:53.704736  421205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c33b1f5c4481e3865d2c10e6d2d19afe2a2ea581c4fb2eeaf81b4cbf188a97ed \
	I0216 17:39:53.704769  421205 kubeadm.go:322] 	--control-plane 
	I0216 17:39:53.704776  421205 kubeadm.go:322] 
	I0216 17:39:53.704878  421205 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0216 17:39:53.704885  421205 kubeadm.go:322] 
	I0216 17:39:53.704982  421205 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token b545ud.qoxywc1rux2naq15 \
	I0216 17:39:53.705100  421205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c33b1f5c4481e3865d2c10e6d2d19afe2a2ea581c4fb2eeaf81b4cbf188a97ed 
	I0216 17:39:53.708918  421205 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:39:53.709126  421205 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
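
The join commands printed by kubeadm above embed a short-lived bootstrap token (b545ud.qoxywc1rux2naq15; bootstrap tokens default to a 24h TTL) and a CA public-key hash that lets joining nodes pin the cluster CA. This test never joins extra nodes, but for context the stock kubeadm recipe to mint a fresh join command after the token expires is:

    kubeadm token create --print-join-command

and the --discovery-token-ca-cert-hash value can be recomputed on the control plane with:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
      openssl rsa -pubin -outform der 2>/dev/null | \
      openssl dgst -sha256 -hex | sed 's/^.* //'
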
	I0216 17:39:53.709148  421205 cni.go:84] Creating CNI manager for ""
	I0216 17:39:53.709168  421205 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 17:39:53.711998  421205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 17:39:52.640913  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:55.140750  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:53.714013  421205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 17:39:53.727031  421205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
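
The 457-byte conflist written above is the bridge CNI config announced at 17:39:53. The log does not print its contents; a minimal bridge conflist of roughly that shape (plugin types and fields are standard CNI, and the subnet here is purely illustrative, not taken from this run) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
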
	I0216 17:39:53.811313  421205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 17:39:53.811367  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:53.811413  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9 minikube.k8s.io/name=default-k8s-diff-port-816748 minikube.k8s.io/updated_at=2024_02_16T17_39_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:54.021083  421205 ops.go:34] apiserver oom_adj: -16
	I0216 17:39:54.021156  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:54.521783  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:55.022023  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:55.521421  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:56.021555  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:56.521524  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:57.021852  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:57.521744  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:57.640415  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:00.139644  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:56.373478  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:56.383879  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:39:56.401408  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.401433  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:39:56.401477  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:39:56.418690  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.418712  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:39:56.418759  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:39:56.436337  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.436362  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:39:56.436415  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:39:56.455521  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.455553  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:39:56.455602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:39:56.473949  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.473981  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:39:56.474028  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:39:56.491473  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.491495  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:39:56.491541  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:39:56.509845  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.509869  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:39:56.509955  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:39:56.528197  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.528222  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:39:56.528231  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:39:56.528242  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:39:56.549520  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.551570  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.558562  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:56.559087  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.571119  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.571305  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.579133  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.586015  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:39:56.590770  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:39:56.590803  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:39:56.615066  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:39:56.615101  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:39:56.678064  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:39:56.678096  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:39:56.678114  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:39:56.695201  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:39:56.695238  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:39:56.736311  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:56.736338  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:39:56.736412  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:39:56.736433  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.736451  455078 out.go:239]   Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.736465  455078 out.go:239]   Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.736474  455078 out.go:239]   Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.736483  455078 out.go:239]   Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:39:56.736496  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:56.736508  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:39:58.021227  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:58.521363  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:59.021155  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:59.521559  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:00.021409  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:00.521925  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:01.022133  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:01.522131  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:02.021930  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:02.521763  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:02.140368  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:04.639630  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:03.022096  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:03.521373  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:04.021412  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:04.521179  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:05.021348  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:05.521512  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:06.021569  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:06.521578  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:06.612467  421205 kubeadm.go:1088] duration metric: took 12.801150825s to wait for elevateKubeSystemPrivileges.
	I0216 17:40:06.612503  421205 kubeadm.go:406] StartCluster complete in 4m56.263224158s
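
The half-second cadence of the `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait just timed at 12.8s: it polls until the "default" ServiceAccount exists (that account is created by the controller-manager's serviceaccount controller, so its presence shows the control plane is serving) before the cluster-admin binding applied at 17:39:53 can take effect. Assuming the binding name from that earlier command, success can be verified with:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl get clusterrolebinding minikube-rbac --kubeconfig=/var/lib/minikube/kubeconfig
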
	I0216 17:40:06.612526  421205 settings.go:142] acquiring lock: {Name:mkc0445e63ab2bfc5d2d7306f3af19ca96df275c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:40:06.612605  421205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:40:06.614600  421205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/kubeconfig: {Name:mkdc2ed683d72ff0e162ea619463de7edb9c0858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:40:06.616255  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 17:40:06.616305  421205 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 17:40:06.616387  421205 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616409  421205 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-816748"
	W0216 17:40:06.616417  421205 addons.go:243] addon storage-provisioner should already be in state true
	I0216 17:40:06.616458  421205 config.go:182] Loaded profile config "default-k8s-diff-port-816748": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:40:06.616470  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.616511  421205 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616527  421205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-816748"
	I0216 17:40:06.616614  421205 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616633  421205 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-816748"
	W0216 17:40:06.616642  421205 addons.go:243] addon metrics-server should already be in state true
	I0216 17:40:06.616678  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.616835  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.616951  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.616959  421205 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616973  421205 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-816748"
	W0216 17:40:06.616980  421205 addons.go:243] addon dashboard should already be in state true
	I0216 17:40:06.617018  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.617107  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.617436  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.648433  421205 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0216 17:40:06.650072  421205 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0216 17:40:06.652286  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0216 17:40:06.652308  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0216 17:40:06.653725  421205 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0216 17:40:06.652367  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	I0216 17:40:06.654228  421205 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-816748"
	I0216 17:40:06.655391  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 17:40:06.656747  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 17:40:06.656777  421205 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:40:06.656793  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	W0216 17:40:06.656808  421205 addons.go:243] addon default-storageclass should already be in state true
	I0216 17:40:06.658385  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.658341  421205 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:40:06.658473  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 17:40:06.658518  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	I0216 17:40:06.658765  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.674555  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.677611  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.679326  421205 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 17:40:06.679343  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 17:40:06.679382  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	I0216 17:40:06.681559  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.703643  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.913413  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 17:40:06.915276  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 17:40:06.915298  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0216 17:40:06.922563  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
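
The sed pipeline above splices two edits into the CoreDNS Corefile before replacing the ConfigMap: a `log` directive ahead of `errors`, and a `hosts` block ahead of the `forward . /etc/resolv.conf` line so pods can resolve host.minikube.internal to the host gateway (192.168.67.1, confirmed by the injection message at 17:40:08 below). The resulting Corefile fragment is roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
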
	I0216 17:40:06.926729  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0216 17:40:06.926756  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0216 17:40:06.995331  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:40:07.005872  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 17:40:07.005905  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 17:40:07.103003  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0216 17:40:07.103037  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0216 17:40:07.110492  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 17:40:07.110518  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 17:40:07.120377  421205 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-816748" context rescaled to 1 replicas
	I0216 17:40:07.120485  421205 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:40:07.122904  421205 out.go:177] * Verifying Kubernetes components...
	I0216 17:40:07.124464  421205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:40:07.213518  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0216 17:40:07.213549  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0216 17:40:07.295281  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 17:40:07.409983  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0216 17:40:07.410082  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0216 17:40:07.599285  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0216 17:40:07.599372  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0216 17:40:07.706049  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0216 17:40:07.706088  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0216 17:40:07.794066  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0216 17:40:07.794105  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0216 17:40:07.822000  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0216 17:40:07.822081  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0216 17:40:07.911598  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 17:40:07.911625  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0216 17:40:07.992925  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 17:40:08.711726  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.798252087s)
	I0216 17:40:08.994620  421205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.072013054s)
	I0216 17:40:08.994687  421205 start.go:929] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0216 17:40:09.404133  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.408758523s)
	I0216 17:40:09.404258  421205 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.279733497s)
	I0216 17:40:09.404326  421205 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-816748" to be "Ready" ...
	I0216 17:40:09.410294  421205 node_ready.go:49] node "default-k8s-diff-port-816748" has status "Ready":"True"
	I0216 17:40:09.410317  421205 node_ready.go:38] duration metric: took 5.951342ms waiting for node "default-k8s-diff-port-816748" to be "Ready" ...
	I0216 17:40:09.410329  421205 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 17:40:09.416584  421205 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6dd5s" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:09.531400  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.236063439s)
	I0216 17:40:09.531444  421205 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-816748"
	I0216 17:40:10.207461  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.21448694s)
	I0216 17:40:10.208862  421205 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-816748 addons enable metrics-server
	
	I0216 17:40:10.210493  421205 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0216 17:40:06.646176  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:09.140721  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
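
Interleaved with the default-k8s-diff-port bring-up, process 388513 has been polling the same metrics-server pod since 17:39:48 without it ever reporting Ready, and the log never shows the cause. Assuming kubectl access to that profile, the usual next step would be:

    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-mwshp

(the Events section typically names the failing image pull or probe).
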
	I0216 17:40:06.738101  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:06.750726  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:06.772968  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.772995  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:06.773046  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:06.791480  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.791505  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:06.791551  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:06.815979  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.816012  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:06.816068  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:06.842123  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.842147  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:06.842203  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:06.860609  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.860654  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:06.860709  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:06.879119  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.879147  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:06.879191  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:06.898150  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.898182  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:06.898242  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:06.924427  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.924445  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:06.924454  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:06.924465  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:06.953125  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:06.953295  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:06.960436  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:06.965576  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:06.972709  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:06.974757  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:06.985103  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:06.985250  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:06.988009  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:06.988029  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:07.022943  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:07.023046  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:07.085083  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:07.085110  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:07.085127  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:07.106416  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:07.106465  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:07.152094  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:07.152117  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:07.152199  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:07.152209  455078 out.go:239]   Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:07.152220  455078 out.go:239]   Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:07.152227  455078 out.go:239]   Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:07.152233  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:07.152240  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:07.152247  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:07.152255  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
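The kubelet problems summarized above all trace to one ImageInspectError: Docker returns the v1.16.0 control-plane images without an Id or size set. A hedged sketch of inspecting that metadata directly on the node (the image name is taken from the log; the command is illustrative, not part of the harness):

	# Print the image Id and size that the kubelet's inspect call relies on;
	# an empty Id or zero size here would reproduce the error in the log.
	docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 \
	  --format 'id={{.Id}} size={{.Size}}'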
	I0216 17:40:10.212229  421205 addons.go:505] enable addons completed in 3.595922671s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0216 17:40:10.423944  421205 pod_ready.go:92] pod "coredns-5dd5756b68-6dd5s" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.423988  421205 pod_ready.go:81] duration metric: took 1.007376782s waiting for pod "coredns-5dd5756b68-6dd5s" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.424003  421205 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.429495  421205 pod_ready.go:92] pod "etcd-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.429524  421205 pod_ready.go:81] duration metric: took 5.513071ms waiting for pod "etcd-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.429537  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.497606  421205 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.497644  421205 pod_ready.go:81] duration metric: took 68.098616ms waiting for pod "kube-apiserver-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.497660  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.503258  421205 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.503280  421205 pod_ready.go:81] duration metric: took 5.611297ms waiting for pod "kube-controller-manager-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.503290  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7czt" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.607945  421205 pod_ready.go:92] pod "kube-proxy-f7czt" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.607971  421205 pod_ready.go:81] duration metric: took 104.674051ms waiting for pod "kube-proxy-f7czt" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.607986  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:11.008078  421205 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:11.008126  421205 pod_ready.go:81] duration metric: took 400.130876ms waiting for pod "kube-scheduler-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:11.008144  421205 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace to be "Ready" ...
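Each pod_ready step above is the programmatic equivalent of polling the pod's Ready condition. A rough manual counterpart (assuming a kubeconfig pointing at the default-k8s-diff-port-816748 cluster):

	# Prints "True" once the Ready condition is set, mirroring the
	# has status "Ready":"True" checks in the log.
	kubectl -n kube-system get pod etcd-default-k8s-diff-port-816748 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'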
	I0216 17:40:11.141383  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:13.640883  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:13.014986  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:15.514133  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:17.515916  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:16.140859  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:18.141101  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:20.640092  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:17.154126  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:17.166732  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:17.188369  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.188397  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:17.188456  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:17.208931  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.208958  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:17.209015  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:17.231036  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.231064  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:17.231117  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:17.251517  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.251544  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:17.251609  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:17.273246  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.273278  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:17.273329  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:17.294078  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.294106  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:17.294162  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:17.315685  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.315708  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:17.315752  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:17.339445  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.339468  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:17.339477  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:17.339488  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:17.373320  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:17.373357  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:17.450406  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:17.450427  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:17.450442  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:17.470514  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:17.470553  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:17.518001  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:17.518029  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:17.548549  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.551801  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.566478  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:17.566729  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:17.584759  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.587832  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.593226  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:17.595733  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:17.595755  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:17.595804  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:17.595815  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:17.595822  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:17.595829  455078 out.go:239]   Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.595838  455078 out.go:239]   Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.595847  455078 out.go:239]   Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:17.595855  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:17.595860  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
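The empty container lists above come from name-filtered `docker ps` calls: kubelet-managed containers are named `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, so filtering on the `k8s_` prefix finds them even after they exit. A sketch of the same probe run by hand (names taken from the log):

	# -a includes exited containers; zero rows for every component is
	# consistent with the ImageInspectError preventing any start at all.
	docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Names}}'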
	I0216 17:40:20.014588  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:22.014673  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:22.640353  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:23.139864  388513 pod_ready.go:81] duration metric: took 4m0.005711416s waiting for pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace to be "Ready" ...
	E0216 17:40:23.139887  388513 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0216 17:40:23.139894  388513 pod_ready.go:38] duration metric: took 4m1.197458921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
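The 4m0s deadline above expired with metrics-server still Pending; the harness records the error and moves on rather than failing the run. A hedged kubectl equivalent (the `k8s-app=metrics-server` label is an assumption about the addon's manifest):

	# Blocks until Ready or the deadline; here it would time out, matching
	# the "context deadline exceeded" entry in the log.
	kubectl -n kube-system wait pod -l k8s-app=metrics-server \
	  --for=condition=Ready --timeout=4m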
	I0216 17:40:23.139912  388513 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:40:23.139973  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:23.157852  388513 logs.go:276] 1 containers: [ee128c09c2d6]
	I0216 17:40:23.157924  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:23.178684  388513 logs.go:276] 1 containers: [6ddccc19fa99]
	I0216 17:40:23.178767  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:23.196651  388513 logs.go:276] 1 containers: [403deca60e52]
	I0216 17:40:23.196736  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:23.214872  388513 logs.go:276] 1 containers: [c5d843a77086]
	I0216 17:40:23.214936  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:23.232995  388513 logs.go:276] 1 containers: [cda0e6c36571]
	I0216 17:40:23.233093  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:23.251975  388513 logs.go:276] 1 containers: [f11e3bd1e9f2]
	I0216 17:40:23.252067  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:23.269953  388513 logs.go:276] 0 containers: []
	W0216 17:40:23.269984  388513 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:23.270043  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:40:23.287999  388513 logs.go:276] 1 containers: [e4861933e8ab]
	I0216 17:40:23.288072  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:23.307186  388513 logs.go:276] 1 containers: [9d42bc551893]
	I0216 17:40:23.307243  388513 logs.go:123] Gathering logs for coredns [403deca60e52] ...
	I0216 17:40:23.307259  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403deca60e52"
	I0216 17:40:23.327277  388513 logs.go:123] Gathering logs for kube-scheduler [c5d843a77086] ...
	I0216 17:40:23.327304  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d843a77086"
	I0216 17:40:23.353566  388513 logs.go:123] Gathering logs for container status ...
	I0216 17:40:23.353607  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:23.410553  388513 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:23.410616  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:40:23.497408  388513 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:23.497446  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:40:23.592826  388513 logs.go:123] Gathering logs for kube-apiserver [ee128c09c2d6] ...
	I0216 17:40:23.592857  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee128c09c2d6"
	I0216 17:40:23.626632  388513 logs.go:123] Gathering logs for etcd [6ddccc19fa99] ...
	I0216 17:40:23.626668  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddccc19fa99"
	I0216 17:40:23.652222  388513 logs.go:123] Gathering logs for storage-provisioner [e4861933e8ab] ...
	I0216 17:40:23.652256  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4861933e8ab"
	I0216 17:40:23.672102  388513 logs.go:123] Gathering logs for kubernetes-dashboard [9d42bc551893] ...
	I0216 17:40:23.672131  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d42bc551893"
	I0216 17:40:23.693163  388513 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:23.693190  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:23.746041  388513 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:23.746081  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:23.772653  388513 logs.go:123] Gathering logs for kube-proxy [cda0e6c36571] ...
	I0216 17:40:23.772690  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cda0e6c36571"
	I0216 17:40:23.795423  388513 logs.go:123] Gathering logs for kube-controller-manager [f11e3bd1e9f2] ...
	I0216 17:40:23.795457  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11e3bd1e9f2"
	I0216 17:40:24.513521  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:26.515124  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:26.339041  388513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:26.351529  388513 api_server.go:72] duration metric: took 4m7.137437385s to wait for apiserver process to appear ...
	I0216 17:40:26.351556  388513 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:40:26.351633  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:26.369719  388513 logs.go:276] 1 containers: [ee128c09c2d6]
	I0216 17:40:26.369790  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:26.389630  388513 logs.go:276] 1 containers: [6ddccc19fa99]
	I0216 17:40:26.389709  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:26.408167  388513 logs.go:276] 1 containers: [403deca60e52]
	I0216 17:40:26.408256  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:26.425906  388513 logs.go:276] 1 containers: [c5d843a77086]
	I0216 17:40:26.425984  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:26.444534  388513 logs.go:276] 1 containers: [cda0e6c36571]
	I0216 17:40:26.444648  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:26.465664  388513 logs.go:276] 1 containers: [f11e3bd1e9f2]
	I0216 17:40:26.465740  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:26.483212  388513 logs.go:276] 0 containers: []
	W0216 17:40:26.483244  388513 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:26.483305  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:40:26.502035  388513 logs.go:276] 1 containers: [e4861933e8ab]
	I0216 17:40:26.502118  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:26.522106  388513 logs.go:276] 1 containers: [9d42bc551893]
	I0216 17:40:26.522147  388513 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:26.522158  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:40:26.610242  388513 logs.go:123] Gathering logs for kubernetes-dashboard [9d42bc551893] ...
	I0216 17:40:26.610280  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d42bc551893"
	I0216 17:40:26.634292  388513 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:26.634335  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:26.687413  388513 logs.go:123] Gathering logs for coredns [403deca60e52] ...
	I0216 17:40:26.687450  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403deca60e52"
	I0216 17:40:26.709327  388513 logs.go:123] Gathering logs for storage-provisioner [e4861933e8ab] ...
	I0216 17:40:26.709357  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4861933e8ab"
	I0216 17:40:26.734394  388513 logs.go:123] Gathering logs for container status ...
	I0216 17:40:26.734431  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:26.802087  388513 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:26.802122  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:40:26.901820  388513 logs.go:123] Gathering logs for kube-scheduler [c5d843a77086] ...
	I0216 17:40:26.901854  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d843a77086"
	I0216 17:40:26.928476  388513 logs.go:123] Gathering logs for kube-proxy [cda0e6c36571] ...
	I0216 17:40:26.928505  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cda0e6c36571"
	I0216 17:40:26.949968  388513 logs.go:123] Gathering logs for kube-controller-manager [f11e3bd1e9f2] ...
	I0216 17:40:26.949998  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11e3bd1e9f2"
	I0216 17:40:26.990305  388513 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:26.990335  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:27.015341  388513 logs.go:123] Gathering logs for kube-apiserver [ee128c09c2d6] ...
	I0216 17:40:27.015376  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee128c09c2d6"
	I0216 17:40:27.045881  388513 logs.go:123] Gathering logs for etcd [6ddccc19fa99] ...
	I0216 17:40:27.045914  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddccc19fa99"
	I0216 17:40:29.572745  388513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0216 17:40:29.577898  388513 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0216 17:40:29.578920  388513 api_server.go:141] control plane version: v1.28.4
	I0216 17:40:29.578940  388513 api_server.go:131] duration metric: took 3.227378488s to wait for apiserver health ...
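The healthz probe above is a plain HTTPS GET against the apiserver, with a 200 response and body `ok` treated as healthy. A minimal sketch (assuming the node IP from the log is reachable from the host):

	# -k skips certificate verification for this self-signed endpoint;
	# a bare "ok" body matches the 200 logged above.
	curl -k https://192.168.85.2:8443/healthz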
	I0216 17:40:29.578948  388513 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:40:29.579008  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:29.598562  388513 logs.go:276] 1 containers: [ee128c09c2d6]
	I0216 17:40:29.598650  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:29.617159  388513 logs.go:276] 1 containers: [6ddccc19fa99]
	I0216 17:40:29.617231  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:29.635295  388513 logs.go:276] 1 containers: [403deca60e52]
	I0216 17:40:29.635357  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:29.653771  388513 logs.go:276] 1 containers: [c5d843a77086]
	I0216 17:40:29.653859  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:29.671979  388513 logs.go:276] 1 containers: [cda0e6c36571]
	I0216 17:40:29.672047  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:29.690510  388513 logs.go:276] 1 containers: [f11e3bd1e9f2]
	I0216 17:40:29.690594  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:29.709610  388513 logs.go:276] 0 containers: []
	W0216 17:40:29.709634  388513 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:29.709689  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:29.731047  388513 logs.go:276] 1 containers: [9d42bc551893]
	I0216 17:40:29.731144  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:40:29.754845  388513 logs.go:276] 1 containers: [e4861933e8ab]
	I0216 17:40:29.754902  388513 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:29.754917  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:40:29.843952  388513 logs.go:123] Gathering logs for coredns [403deca60e52] ...
	I0216 17:40:29.843989  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403deca60e52"
	I0216 17:40:29.864802  388513 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:29.864828  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:29.920686  388513 logs.go:123] Gathering logs for kube-apiserver [ee128c09c2d6] ...
	I0216 17:40:29.920724  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee128c09c2d6"
	I0216 17:40:29.951656  388513 logs.go:123] Gathering logs for kube-scheduler [c5d843a77086] ...
	I0216 17:40:29.951695  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d843a77086"
	I0216 17:40:29.978677  388513 logs.go:123] Gathering logs for kube-proxy [cda0e6c36571] ...
	I0216 17:40:29.978715  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cda0e6c36571"
	I0216 17:40:30.001402  388513 logs.go:123] Gathering logs for container status ...
	I0216 17:40:30.001434  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:30.061025  388513 logs.go:123] Gathering logs for etcd [6ddccc19fa99] ...
	I0216 17:40:30.061057  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddccc19fa99"
	I0216 17:40:30.088081  388513 logs.go:123] Gathering logs for kube-controller-manager [f11e3bd1e9f2] ...
	I0216 17:40:30.088120  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11e3bd1e9f2"
	I0216 17:40:30.130971  388513 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:30.131005  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:30.154482  388513 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:30.154518  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:40:30.249872  388513 logs.go:123] Gathering logs for kubernetes-dashboard [9d42bc551893] ...
	I0216 17:40:30.249907  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d42bc551893"
	I0216 17:40:30.271318  388513 logs.go:123] Gathering logs for storage-provisioner [e4861933e8ab] ...
	I0216 17:40:30.271347  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4861933e8ab"
	I0216 17:40:27.597408  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:27.608054  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:27.625950  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.625980  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:27.626038  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:27.643801  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.643825  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:27.643880  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:27.661848  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.661878  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:27.661942  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:27.680910  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.680935  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:27.680984  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:27.698550  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.698575  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:27.698619  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:27.716355  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.716386  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:27.716449  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:27.739573  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.739621  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:27.739686  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:27.760360  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.760383  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:27.760395  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:27.760426  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:27.779114  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:27.779170  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:27.818659  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:27.818687  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:27.841156  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.841308  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.853903  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:27.855874  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:27.859522  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.864706  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.874176  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:40:27.879404  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:27.879429  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:27.903542  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:27.903580  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:27.964966  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:27.964993  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:27.965008  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:27.965060  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:27.965077  455078 out.go:239]   Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:27.965133  455078 out.go:239]   Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:27.965146  455078 out.go:239]   Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.965155  455078 out.go:239]   Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.965165  455078 out.go:239]   Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:40:27.965175  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:27.965182  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:32.798008  388513 system_pods.go:59] 8 kube-system pods found
	I0216 17:40:32.798035  388513 system_pods.go:61] "coredns-5dd5756b68-qxbsw" [86635938-da74-4ed1-84bc-86c0fe6f2702] Running
	I0216 17:40:32.798040  388513 system_pods.go:61] "etcd-embed-certs-162802" [4ceabd92-09a4-457e-a4df-978436c3a95b] Running
	I0216 17:40:32.798046  388513 system_pods.go:61] "kube-apiserver-embed-certs-162802" [3eed31be-48b2-40c6-95b2-b468485f7b32] Running
	I0216 17:40:32.798051  388513 system_pods.go:61] "kube-controller-manager-embed-certs-162802" [35a7a353-daa8-45d5-9a40-a7b9715036e5] Running
	I0216 17:40:32.798055  388513 system_pods.go:61] "kube-proxy-7w7fm" [a11a21da-10f2-49b5-8b5c-c7b201db94f6] Running
	I0216 17:40:32.798059  388513 system_pods.go:61] "kube-scheduler-embed-certs-162802" [6aab76ff-2e1f-41d5-b007-0daeb8d2da79] Running
	I0216 17:40:32.798065  388513 system_pods.go:61] "metrics-server-57f55c9bc5-mwshp" [fb2ed14c-f295-431c-8223-cd10088ca15a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:40:32.798069  388513 system_pods.go:61] "storage-provisioner" [8b68bcc2-40d3-4b43-855f-5787f2bb54e7] Running
	I0216 17:40:32.798076  388513 system_pods.go:74] duration metric: took 3.219122938s to wait for pod list to return data ...
	I0216 17:40:32.798083  388513 default_sa.go:34] waiting for default service account to be created ...
	I0216 17:40:32.800438  388513 default_sa.go:45] found service account: "default"
	I0216 17:40:32.800461  388513 default_sa.go:55] duration metric: took 2.372693ms for default service account to be created ...
	I0216 17:40:32.800470  388513 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 17:40:32.805286  388513 system_pods.go:86] 8 kube-system pods found
	I0216 17:40:32.805310  388513 system_pods.go:89] "coredns-5dd5756b68-qxbsw" [86635938-da74-4ed1-84bc-86c0fe6f2702] Running
	I0216 17:40:32.805316  388513 system_pods.go:89] "etcd-embed-certs-162802" [4ceabd92-09a4-457e-a4df-978436c3a95b] Running
	I0216 17:40:32.805320  388513 system_pods.go:89] "kube-apiserver-embed-certs-162802" [3eed31be-48b2-40c6-95b2-b468485f7b32] Running
	I0216 17:40:32.805328  388513 system_pods.go:89] "kube-controller-manager-embed-certs-162802" [35a7a353-daa8-45d5-9a40-a7b9715036e5] Running
	I0216 17:40:32.805336  388513 system_pods.go:89] "kube-proxy-7w7fm" [a11a21da-10f2-49b5-8b5c-c7b201db94f6] Running
	I0216 17:40:32.805342  388513 system_pods.go:89] "kube-scheduler-embed-certs-162802" [6aab76ff-2e1f-41d5-b007-0daeb8d2da79] Running
	I0216 17:40:32.805352  388513 system_pods.go:89] "metrics-server-57f55c9bc5-mwshp" [fb2ed14c-f295-431c-8223-cd10088ca15a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:40:32.805389  388513 system_pods.go:89] "storage-provisioner" [8b68bcc2-40d3-4b43-855f-5787f2bb54e7] Running
	I0216 17:40:32.805397  388513 system_pods.go:126] duration metric: took 4.922741ms to wait for k8s-apps to be running ...
	I0216 17:40:32.805407  388513 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 17:40:32.805452  388513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:40:32.817981  388513 system_svc.go:56] duration metric: took 12.566654ms WaitForService to wait for kubelet.
	I0216 17:40:32.818013  388513 kubeadm.go:581] duration metric: took 4m13.603925134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 17:40:32.818038  388513 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:40:32.820989  388513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0216 17:40:32.821015  388513 node_conditions.go:123] node cpu capacity is 8
	I0216 17:40:32.821027  388513 node_conditions.go:105] duration metric: took 2.983734ms to run NodePressure ...
	I0216 17:40:32.821039  388513 start.go:228] waiting for startup goroutines ...
	I0216 17:40:32.821047  388513 start.go:233] waiting for cluster config update ...
	I0216 17:40:32.821063  388513 start.go:242] writing updated cluster config ...
	I0216 17:40:32.821410  388513 ssh_runner.go:195] Run: rm -f paused
	I0216 17:40:32.870101  388513 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0216 17:40:32.872413  388513 out.go:177] * Done! kubectl is now configured to use "embed-certs-162802" cluster and "default" namespace by default
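The closing skew note reflects kubectl's support policy: a client may be one minor version newer or older than the apiserver, so 1.29.2 against 1.28.4 is logged but allowed. To see both versions for a given context:

	# Prints client and server versions; a skew of one minor is within policy.
	kubectl version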
	I0216 17:40:29.013422  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:31.013924  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:33.014487  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:35.515361  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:37.966560  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:37.977313  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:37.994775  455078 logs.go:276] 0 containers: []
	W0216 17:40:37.994798  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:37.994844  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:38.012932  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.012960  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:38.013014  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:38.033792  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.033820  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:38.033880  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:38.052523  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.052549  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:38.052610  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:38.072650  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.072705  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:38.072765  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:38.092189  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.092223  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:38.092296  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:38.110333  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.110359  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:38.110404  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:38.128992  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.129027  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:38.129037  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:38.129048  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:38.167101  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:38.167135  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:38.186657  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.191871  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:38.201457  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:38.207565  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:38.209614  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.217808  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:40:38.224819  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:38.224859  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:38.248754  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:38.248833  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:38.311199  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:38.311223  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:38.311236  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:38.327036  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:38.327063  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:38.327121  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:38.327132  455078 out.go:239]   Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:38.327140  455078 out.go:239]   Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:38.327148  455078 out.go:239]   Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:38.327154  455078 out.go:239]   Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.327160  455078 out.go:239]   Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:40:38.327169  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:38.327174  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:38.014130  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:40.014835  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:42.015514  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:44.514285  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:47.015104  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:48.327861  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:48.339194  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:48.360648  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.360673  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:48.360728  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:48.378486  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.378513  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:48.378557  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:48.398639  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.398666  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:48.398712  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:48.417793  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.417817  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:48.417873  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:48.435529  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.435552  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:48.435602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:48.457049  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.457082  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:48.457155  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:48.477801  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.477826  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:48.477868  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:48.496234  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.496257  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:48.496265  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:48.496278  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:48.538184  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:48.538212  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:48.564633  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.566786  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:48.576446  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:48.585675  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:48.585865  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.588023  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:48.601821  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:48.601858  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:48.626705  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:48.626746  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:48.803956  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:48.803984  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:48.803997  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:48.820684  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:48.820710  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:48.820755  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:48.820765  455078 out.go:239]   Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:48.820790  455078 out.go:239]   Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:48.820799  455078 out.go:239]   Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:48.820807  455078 out.go:239]   Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.820814  455078 out.go:239]   Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:48.820820  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:48.820826  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
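
Annotation: the log-gathering cycle above repeats on a roughly ten-second cadence (17:40:38, 17:40:48, 17:40:59, ...): pgrep for a kube-apiserver process, one `docker ps -a` query per expected control-plane container, then kubelet, dmesg, describe-nodes, and Docker logs. Each container query uses the kubelet's `k8s_` naming convention as a name filter and an ID-only output format, so an empty result ("0 containers") means the container was never created:

    # List any container whose name matches the kubelet naming convention
    # for the apiserver; empty output reproduces the "0 containers" lines.
    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'
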
	I0216 17:40:49.514565  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:52.014075  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:54.515655  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:57.014540  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:58.821518  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:58.832683  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:58.850170  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.850200  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:58.850256  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:58.868305  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.868327  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:58.868367  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:58.887531  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.887556  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:58.887602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:58.905145  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.905176  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:58.905229  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:58.923499  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.923530  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:58.923587  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:58.941547  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.941581  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:58.941629  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:58.959233  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.959258  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:58.959309  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:58.977281  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.977302  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:58.977313  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:58.977323  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:59.015956  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:59.015983  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:59.040126  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.040302  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:59.042282  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.056437  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:59.062909  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.065045  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.065415  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:40:59.073540  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:59.073574  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:59.097435  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:59.097482  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:59.159801  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:59.159827  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:59.159839  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:59.176592  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:59.176621  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:59.176676  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:59.176684  455078 out.go:239]   Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.176693  455078 out.go:239]   Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:59.176709  455078 out.go:239]   Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.176718  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.176728  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:40:59.176735  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:59.176740  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
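
Annotation: the "container status" step uses a fallback chain so it works on both CRI-enabled and Docker-only nodes: `` `which crictl || echo crictl` `` substitutes the crictl path when one is installed (or the bare name, which then fails cleanly), and the trailing `|| sudo docker ps -a` runs whenever the crictl invocation fails for any reason. A roughly equivalent expansion for readability (a sketch; unlike the one-liner, this version does not fall back to docker when crictl exists but errors):

    # Prefer crictl when present; otherwise fall back to docker.
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi
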
	I0216 17:40:59.514085  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:01.514563  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:04.014812  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:06.514468  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:09.178430  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:09.189176  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:09.207320  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.207345  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:09.207400  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:09.225002  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.225033  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:09.225096  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:09.243928  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.243959  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:09.244013  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:09.262481  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.262505  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:09.262559  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:09.279969  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.279992  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:09.280049  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:09.297754  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.297777  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:09.297825  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:09.315771  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.315800  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:09.315852  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:09.333460  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.333491  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:09.333500  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:09.333511  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:09.355521  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.362102  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.364251  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.364640  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:09.381046  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.388010  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.388320  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.390233  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:41:09.392031  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:09.392060  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:09.417243  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:09.417287  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:09.478675  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:09.478700  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:09.478711  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:09.495170  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:09.495201  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:09.534342  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:09.534369  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:09.534418  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:09.534429  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:09.534440  455078 out.go:239]   Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.534451  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.534457  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.534469  455078 out.go:239]   Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:41:09.534474  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:09.534482  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:09.014641  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:11.513750  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:13.513808  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:16.014153  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:19.535038  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:19.545504  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:19.563494  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.563519  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:19.563579  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:19.581616  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.581645  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:19.581692  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:19.599875  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.599906  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:19.599956  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:19.618224  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.618251  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:19.618310  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:19.637362  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.637392  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:19.637442  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:19.655724  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.655755  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:19.655800  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:19.672560  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.672588  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:19.672636  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:19.690212  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.690239  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:19.690251  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:19.690265  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:19.719464  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.726630  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:19.726900  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:19.728877  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:19.741983  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.745889  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:19.748644  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:19.748681  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:19.774437  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:19.774473  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:19.836688  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:19.836707  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:19.836719  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:19.852476  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:19.852506  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:19.889446  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:19.889484  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:19.889541  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:19.889559  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:19.889574  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:19.889591  455078 out.go:239]   Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:19.889607  455078 out.go:239]   Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.889625  455078 out.go:239]   Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:19.889639  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:19.889653  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:18.514473  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:21.014774  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:23.513640  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:25.514552  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:27.514673  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
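
Annotation: the interleaved 421205 lines belong to a different profile in the same parallel run; that test polls the Ready condition of its metrics-server pod every couple of seconds and has been seeing "False" throughout this window. The equivalent manual check (a sketch; the pod name is taken from the log, and the kubeconfig context for that profile is assumed to be active):

    # Print the Ready condition the other test is waiting on;
    # "False" here reproduces the pod_ready lines above.
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-tdw8t \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
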
	I0216 17:41:29.891027  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:29.901935  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:29.919667  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.919697  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:29.919757  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:29.937792  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.937823  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:29.937873  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:29.955488  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.955513  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:29.955557  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:29.973119  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.973147  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:29.973194  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:29.991607  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.991634  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:29.991681  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:30.010229  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.010258  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:30.010330  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:30.029419  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.029446  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:30.029496  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:30.047844  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.047870  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:30.047882  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:30.047900  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:30.108010  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:30.108031  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:30.108042  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:30.124087  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:30.124121  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:30.161506  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:30.161532  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:30.182528  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.182822  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.184822  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.197489  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.201168  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.204811  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.208217  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.216614  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.220063  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:30.221825  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:30.221850  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:30.245800  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:30.245840  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:30.245897  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:30.245910  455078 out.go:239]   Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.245938  455078 out.go:239]   Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.245947  455078 out.go:239]   Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.245955  455078 out.go:239]   Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.245969  455078 out.go:239]   Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:30.245977  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:30.245986  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:30.013845  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:32.014461  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:34.513494  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:36.513808  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:40.247341  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:40.258231  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:40.277091  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.277115  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:40.277170  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:40.295536  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.295559  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:40.295604  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:40.312997  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.313026  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:40.313071  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:40.330525  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.330546  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:40.330589  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:40.348713  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.348742  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:40.348800  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:40.366775  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.366797  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:40.366841  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:40.385643  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.385663  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:40.385707  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:40.403427  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.403450  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:40.403459  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:40.403470  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:40.439890  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:40.439928  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:40.462737  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.466398  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.470658  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.479019  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:40.482450  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.487453  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.489577  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.500740  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:41:40.501258  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:40.501276  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:40.525173  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:40.525207  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:40.587517  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:40.587539  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:40.587555  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:40.603528  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:40.603556  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:40.603611  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:40.603623  455078 out.go:239]   Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:40.603636  455078 out.go:239]   Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.603652  455078 out.go:239]   Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.603661  455078 out.go:239]   Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.603670  455078 out.go:239]   Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:41:40.603681  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:40.603689  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:38.514785  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:41.013816  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:43.013997  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:45.514713  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:50.604423  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:50.614773  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:50.632046  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.632072  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:50.632120  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:50.649668  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.649705  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:50.649752  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:50.667298  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.667324  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:50.667369  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:50.684964  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.684985  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:50.685058  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:50.702294  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.702315  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:50.702372  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:50.719213  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.719242  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:50.719298  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:50.739288  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.739316  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:50.739379  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:50.758688  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.758711  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:50.758721  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:50.758733  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:50.778773  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.784194  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.786483  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:50.798383  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:50.801984  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.805643  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.807814  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:41:50.817121  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:50.817159  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:50.840704  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:50.840735  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:50.902600  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:50.902624  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:50.902661  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:50.920132  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:50.920249  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:50.959025  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:50.959061  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:50.959128  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:50.959146  455078 out.go:239]   Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:50.959160  455078 out.go:239]   Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:50.959176  455078 out.go:239]   Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.959188  455078 out.go:239]   Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.959198  455078 out.go:239]   Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:41:50.959208  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:50.959218  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:48.014075  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:50.015072  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:52.514476  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:54.514722  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:56.515088  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:00.960497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:00.971191  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:00.988983  455078 logs.go:276] 0 containers: []
	W0216 17:42:00.989007  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:00.989051  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:01.007472  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.007502  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:01.007549  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:01.027235  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.027266  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:01.027328  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:01.045396  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.045418  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:01.045466  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:01.063608  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.063634  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:01.063676  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:01.081846  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.081875  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:01.081933  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:01.100572  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.100605  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:01.100656  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:01.118064  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.118093  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:01.118107  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:01.118120  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:01.178472  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:01.178494  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:01.178510  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:01.194152  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:01.194180  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:01.229057  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:01.229088  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:01.252846  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.256323  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.259747  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.261761  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:01.276222  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.278237  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.281914  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.283854  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:01.288825  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:01.288847  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:59.014730  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:01.019158  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:01.312195  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:01.312226  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:01.312273  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:01.312283  455078 out.go:239]   Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:01.312293  455078 out.go:239]   Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.312302  455078 out.go:239]   Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.312314  455078 out.go:239]   Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.312323  455078 out.go:239]   Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:01.312330  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:01.312336  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:03.514449  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:06.013458  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:08.014136  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:10.014502  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:12.514123  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:11.313806  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:11.324599  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:11.342926  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.342950  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:11.343009  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:11.361832  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.361863  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:11.361913  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:11.380388  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.380413  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:11.380463  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:11.398531  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.398555  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:11.398609  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:11.416599  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.416633  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:11.416691  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:11.437302  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.437329  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:11.437381  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:11.455500  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.455526  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:11.455588  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:11.473447  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.473472  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:11.473483  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:11.473499  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:11.489109  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:11.489137  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:11.528617  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:11.528657  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:11.554793  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.556844  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.560487  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:11.562461  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:11.577032  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.577534  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.584091  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:11.584767  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:11.584786  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:11.607897  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:11.607930  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:11.670359  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:11.670384  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:11.670396  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:11.670447  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:11.670457  455078 out.go:239]   Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:11.670467  455078 out.go:239]   Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:11.670473  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.670480  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.670488  455078 out.go:239]   Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:11.670494  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:11.670502  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:14.514921  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:17.013694  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:19.014567  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:21.513547  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:21.671639  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:21.682566  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:21.700727  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.700751  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:21.700797  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:21.718547  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.718575  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:21.718638  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:21.738352  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.738376  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:21.738422  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:21.758981  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.759006  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:21.759060  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:21.779871  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.779920  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:21.779989  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:21.799706  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.799736  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:21.799787  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:21.817228  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.817255  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:21.817308  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:21.836951  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.836983  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:21.836997  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:21.837012  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:21.872431  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:21.872957  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:21.879827  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:21.881920  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:21.893080  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:21.899070  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:21.899090  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:21.922375  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:21.922425  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:21.984024  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:21.984044  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:21.984056  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:22.000242  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:22.000273  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:22.038265  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:22.038288  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:22.038331  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:22.038355  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:22.038363  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:22.038374  455078 out.go:239]   Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:22.038390  455078 out.go:239]   Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:22.038401  455078 out.go:239]   Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:22.038411  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:22.038419  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:23.514418  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:26.014295  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:28.014697  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:30.514199  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:32.039537  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:32.050189  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:32.067646  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.067676  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:32.067745  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:32.087169  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.087213  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:32.087271  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:32.105465  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.105488  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:32.105546  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:32.123431  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.123464  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:32.123516  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:32.141039  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.141064  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:32.141122  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:32.159484  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.159515  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:32.159580  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:32.177162  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.177188  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:32.177241  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:32.194247  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.194275  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:32.194287  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:32.194305  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:32.253876  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
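The describe-nodes step fails because nothing is listening on the apiserver port: consistent with the empty container lists above, localhost:8443 refuses the connection. A two-line reachability probe, offered as a sketch to be run inside the node (host and port taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// host:port from the refused connection in the log
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}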
	I0216 17:42:32.253898  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:32.253912  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:32.270178  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:32.270213  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:32.305859  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:32.305889  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:32.328308  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.330319  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.341106  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:32.347725  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:32.352568  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.354544  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.364250  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:32.364558  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:32.364575  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:32.389634  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:32.389668  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:32.389721  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:32.389734  455078 out.go:239]   Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:32.389744  455078 out.go:239]   Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:32.389754  455078 out.go:239]   Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.389762  455078 out.go:239]   Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.389781  455078 out.go:239]   Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:32.389791  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:32.389801  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
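The block above is one full log-gathering pass: for each expected component, minikube lists matching containers by the dockershim naming convention k8s_<name>, and every list comes back empty. A rough Go equivalent of that enumeration loop (an approximation, not minikube's actual logs.go; assumes the docker CLI is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// component names and the k8s_ prefix as they appear in the log above
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}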
	I0216 17:42:33.014200  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:35.514231  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:38.013671  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:40.014577  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:42.514921  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:42.390328  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:42.401227  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:42.419362  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.419393  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:42.419438  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:42.437451  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.437495  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:42.437554  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:42.455185  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.455206  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:42.455252  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:42.472418  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.472439  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:42.472493  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:42.489791  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.489818  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:42.489867  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:42.507633  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.507662  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:42.507716  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:42.526869  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.526889  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:42.526943  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:42.544969  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.544999  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:42.545011  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:42.545026  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:42.570906  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.575920  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.577964  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:42.587726  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:42.592654  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:35 old-k8s-version-478853 kubelet[1655]: E0216 17:42:35.090202    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.600845  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:40 old-k8s-version-478853 kubelet[1655]: E0216 17:42:40.089571    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.602832  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:41 old-k8s-version-478853 kubelet[1655]: E0216 17:42:41.088872    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:42:42.604949  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:42.604968  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:42.628966  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:42.629003  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:42.688286  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:42.688314  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:42.688331  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:42.704424  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:42.704453  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:42.742407  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:42.742433  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:42.742493  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:42.742501  455078 out.go:239]   Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:42.742508  455078 out.go:239]   Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:42.742517  455078 out.go:239]   Feb 16 17:42:35 old-k8s-version-478853 kubelet[1655]: E0216 17:42:35.090202    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.742552  455078 out.go:239]   Feb 16 17:42:40 old-k8s-version-478853 kubelet[1655]: E0216 17:42:40.089571    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.742559  455078 out.go:239]   Feb 16 17:42:41 old-k8s-version-478853 kubelet[1655]: E0216 17:42:41.088872    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:42:42.742565  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:42.742570  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:45.013590  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:47.014437  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:49.014625  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:51.018229  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:52.743937  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:52.756372  455078 kubeadm.go:640] restartCluster took 4m18.22848465s
	W0216 17:42:52.756471  455078 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0216 17:42:52.756503  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:42:53.532102  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:42:53.543197  455078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:42:53.551917  455078 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:42:53.552015  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:42:53.560427  455078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:42:53.560470  455078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:42:53.726076  455078 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:42:53.785027  455078 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:42:53.785263  455078 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:42:53.865914  455078 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
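Having given up on restarting the cluster, minikube resets and re-runs kubeadm init with a long --ignore-preflight-errors list, which is why the four [WARNING ...] findings above stay non-fatal. A sketch of how that invocation is shaped (paths copied from the log; the ignore list is abbreviated here, and this is not the actual bootstrapper code):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Abbreviated subset of the ignore list from the log; the real invocation
	// also carries the DirAvailable/FileAvailable entries.
	ignored := "Port-10250,Swap,NumCPU,SystemVerification," +
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.16.0:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+ignored)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1) // kubeadm exits non-zero, as it eventually does below
	}
}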
	I0216 17:42:53.514433  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:55.514763  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:58.013778  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:00.014590  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:02.014950  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:04.514618  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:07.013926  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:09.014181  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:11.014763  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:13.515121  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:16.013918  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:18.514051  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:20.514306  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:22.514811  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:25.013525  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:27.013742  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:29.014501  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:31.513545  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:34.014031  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:36.014709  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:38.513816  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:40.514446  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:43.013427  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:45.014134  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:47.513936  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:49.514346  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:51.514431  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:54.014365  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:56.513375  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:58.513426  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:00.513773  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:02.514281  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:04.514604  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:07.013705  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:09.014230  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:11.013826  421205 pod_ready.go:81] duration metric: took 4m0.005667428s waiting for pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace to be "Ready" ...
	E0216 17:44:11.013856  421205 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0216 17:44:11.013866  421205 pod_ready.go:38] duration metric: took 4m1.603526555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
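The metrics-server wait above is a plain poll-until-deadline: check readiness on an interval and stop when a roughly four-minute context expires, which is exactly the "context deadline exceeded" recorded. A self-contained sketch of that pattern (the ready func is a stand-in for the real pod-condition check):

package main

import (
	"context"
	"fmt"
	"time"
)

// waitPodReady polls ready() every 500ms until it returns true or ctx expires.
func waitPodReady(ctx context.Context, ready func() bool) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as in the log
		case <-tick.C:
			if ready() {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	start := time.Now()
	// stand-in condition: metrics-server never reports Ready in this run
	err := waitPodReady(ctx, func() bool { return false })
	fmt.Printf("duration metric: took %s, err=%v\n", time.Since(start), err)
}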
	I0216 17:44:11.013886  421205 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:44:11.013951  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:44:11.033314  421205 logs.go:276] 1 containers: [81cedd311576]
	I0216 17:44:11.033392  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:44:11.051441  421205 logs.go:276] 1 containers: [2cb16166baeb]
	I0216 17:44:11.051513  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:44:11.069761  421205 logs.go:276] 1 containers: [69361b065c2a]
	I0216 17:44:11.069845  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:44:11.088208  421205 logs.go:276] 1 containers: [a24a5700c6d2]
	I0216 17:44:11.088289  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:44:11.106422  421205 logs.go:276] 1 containers: [5e90a8c74405]
	I0216 17:44:11.106498  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:44:11.124867  421205 logs.go:276] 1 containers: [642332d4dcfa]
	I0216 17:44:11.124958  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:44:11.142118  421205 logs.go:276] 0 containers: []
	W0216 17:44:11.142141  421205 logs.go:278] No container was found matching "kindnet"
	I0216 17:44:11.142191  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:44:11.160173  421205 logs.go:276] 1 containers: [92a352db1498]
	I0216 17:44:11.160255  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:44:11.177989  421205 logs.go:276] 1 containers: [b759a7f6ed7e]
	I0216 17:44:11.178048  421205 logs.go:123] Gathering logs for kube-proxy [5e90a8c74405] ...
	I0216 17:44:11.178059  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e90a8c74405"
	I0216 17:44:11.198870  421205 logs.go:123] Gathering logs for kube-controller-manager [642332d4dcfa] ...
	I0216 17:44:11.198912  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642332d4dcfa"
	I0216 17:44:11.239691  421205 logs.go:123] Gathering logs for kubernetes-dashboard [92a352db1498] ...
	I0216 17:44:11.239723  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a352db1498"
	I0216 17:44:11.260526  421205 logs.go:123] Gathering logs for kube-apiserver [81cedd311576] ...
	I0216 17:44:11.260555  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81cedd311576"
	I0216 17:44:11.289289  421205 logs.go:123] Gathering logs for coredns [69361b065c2a] ...
	I0216 17:44:11.289322  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69361b065c2a"
	I0216 17:44:11.309043  421205 logs.go:123] Gathering logs for dmesg ...
	I0216 17:44:11.309071  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:44:11.332161  421205 logs.go:123] Gathering logs for kube-scheduler [a24a5700c6d2] ...
	I0216 17:44:11.332193  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24a5700c6d2"
	I0216 17:44:11.358612  421205 logs.go:123] Gathering logs for Docker ...
	I0216 17:44:11.358647  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:44:11.416041  421205 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:44:11.416086  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:44:11.509698  421205 logs.go:123] Gathering logs for etcd [2cb16166baeb] ...
	I0216 17:44:11.509732  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb16166baeb"
	I0216 17:44:11.536694  421205 logs.go:123] Gathering logs for container status ...
	I0216 17:44:11.536723  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:44:11.592923  421205 logs.go:123] Gathering logs for kubelet ...
	I0216 17:44:11.592963  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:44:11.687551  421205 logs.go:123] Gathering logs for storage-provisioner [b759a7f6ed7e] ...
	I0216 17:44:11.687590  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b759a7f6ed7e"
	I0216 17:44:14.209355  421205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:44:14.220881  421205 api_server.go:72] duration metric: took 4m7.100345078s to wait for apiserver process to appear ...
	I0216 17:44:14.220908  421205 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:44:14.220988  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:44:14.239026  421205 logs.go:276] 1 containers: [81cedd311576]
	I0216 17:44:14.239106  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:44:14.257252  421205 logs.go:276] 1 containers: [2cb16166baeb]
	I0216 17:44:14.257337  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:44:14.276104  421205 logs.go:276] 1 containers: [69361b065c2a]
	I0216 17:44:14.276225  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:44:14.293861  421205 logs.go:276] 1 containers: [a24a5700c6d2]
	I0216 17:44:14.293948  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:44:14.311926  421205 logs.go:276] 1 containers: [5e90a8c74405]
	I0216 17:44:14.312006  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:44:14.330376  421205 logs.go:276] 1 containers: [642332d4dcfa]
	I0216 17:44:14.330464  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:44:14.348311  421205 logs.go:276] 0 containers: []
	W0216 17:44:14.348340  421205 logs.go:278] No container was found matching "kindnet"
	I0216 17:44:14.348395  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:44:14.368016  421205 logs.go:276] 1 containers: [92a352db1498]
	I0216 17:44:14.368086  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:44:14.387308  421205 logs.go:276] 1 containers: [b759a7f6ed7e]
	I0216 17:44:14.387345  421205 logs.go:123] Gathering logs for kubelet ...
	I0216 17:44:14.387355  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:44:14.476907  421205 logs.go:123] Gathering logs for dmesg ...
	I0216 17:44:14.476947  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:44:14.500367  421205 logs.go:123] Gathering logs for etcd [2cb16166baeb] ...
	I0216 17:44:14.500402  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb16166baeb"
	I0216 17:44:14.526205  421205 logs.go:123] Gathering logs for kube-scheduler [a24a5700c6d2] ...
	I0216 17:44:14.526246  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24a5700c6d2"
	I0216 17:44:14.552875  421205 logs.go:123] Gathering logs for Docker ...
	I0216 17:44:14.552907  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:44:14.611707  421205 logs.go:123] Gathering logs for kubernetes-dashboard [92a352db1498] ...
	I0216 17:44:14.611748  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a352db1498"
	I0216 17:44:14.632591  421205 logs.go:123] Gathering logs for container status ...
	I0216 17:44:14.632617  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:44:14.688302  421205 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:44:14.688332  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:44:14.791155  421205 logs.go:123] Gathering logs for kube-apiserver [81cedd311576] ...
	I0216 17:44:14.791187  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81cedd311576"
	I0216 17:44:14.821565  421205 logs.go:123] Gathering logs for kube-proxy [5e90a8c74405] ...
	I0216 17:44:14.821602  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e90a8c74405"
	I0216 17:44:14.842459  421205 logs.go:123] Gathering logs for storage-provisioner [b759a7f6ed7e] ...
	I0216 17:44:14.842490  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b759a7f6ed7e"
	I0216 17:44:14.862520  421205 logs.go:123] Gathering logs for coredns [69361b065c2a] ...
	I0216 17:44:14.862546  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69361b065c2a"
	I0216 17:44:14.884028  421205 logs.go:123] Gathering logs for kube-controller-manager [642332d4dcfa] ...
	I0216 17:44:14.884059  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642332d4dcfa"
	I0216 17:44:17.424507  421205 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0216 17:44:17.428688  421205 api_server.go:279] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0216 17:44:17.429696  421205 api_server.go:141] control plane version: v1.28.4
	I0216 17:44:17.429714  421205 api_server.go:131] duration metric: took 3.208801048s to wait for apiserver health ...
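The healthz check treats an HTTP 200 with body "ok" from https://192.168.67.2:8444/healthz as a healthy apiserver. A minimal sketch of that probe (TLS verification is skipped here as a shortcut; the real client authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// skipped verification is for this sketch only
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8444/healthz") // URL from the log
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}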
	I0216 17:44:17.429722  421205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:44:17.429777  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:44:17.448521  421205 logs.go:276] 1 containers: [81cedd311576]
	I0216 17:44:17.448634  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:44:17.469956  421205 logs.go:276] 1 containers: [2cb16166baeb]
	I0216 17:44:17.470035  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:44:17.487435  421205 logs.go:276] 1 containers: [69361b065c2a]
	I0216 17:44:17.487517  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:44:17.509228  421205 logs.go:276] 1 containers: [a24a5700c6d2]
	I0216 17:44:17.509299  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:44:17.527465  421205 logs.go:276] 1 containers: [5e90a8c74405]
	I0216 17:44:17.527538  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:44:17.545275  421205 logs.go:276] 1 containers: [642332d4dcfa]
	I0216 17:44:17.545352  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:44:17.564370  421205 logs.go:276] 0 containers: []
	W0216 17:44:17.564392  421205 logs.go:278] No container was found matching "kindnet"
	I0216 17:44:17.564435  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:44:17.583087  421205 logs.go:276] 1 containers: [92a352db1498]
	I0216 17:44:17.583149  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:44:17.601835  421205 logs.go:276] 1 containers: [b759a7f6ed7e]
	I0216 17:44:17.601871  421205 logs.go:123] Gathering logs for etcd [2cb16166baeb] ...
	I0216 17:44:17.601882  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb16166baeb"
	I0216 17:44:17.629168  421205 logs.go:123] Gathering logs for coredns [69361b065c2a] ...
	I0216 17:44:17.629205  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69361b065c2a"
	I0216 17:44:17.649281  421205 logs.go:123] Gathering logs for kube-proxy [5e90a8c74405] ...
	I0216 17:44:17.649307  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e90a8c74405"
	I0216 17:44:17.670885  421205 logs.go:123] Gathering logs for container status ...
	I0216 17:44:17.670920  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:44:17.725719  421205 logs.go:123] Gathering logs for kube-controller-manager [642332d4dcfa] ...
	I0216 17:44:17.725751  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642332d4dcfa"
	I0216 17:44:17.765369  421205 logs.go:123] Gathering logs for kube-apiserver [81cedd311576] ...
	I0216 17:44:17.765410  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81cedd311576"
	I0216 17:44:17.799148  421205 logs.go:123] Gathering logs for kube-scheduler [a24a5700c6d2] ...
	I0216 17:44:17.799187  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24a5700c6d2"
	I0216 17:44:17.826035  421205 logs.go:123] Gathering logs for kubernetes-dashboard [92a352db1498] ...
	I0216 17:44:17.826071  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a352db1498"
	I0216 17:44:17.847880  421205 logs.go:123] Gathering logs for storage-provisioner [b759a7f6ed7e] ...
	I0216 17:44:17.847914  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b759a7f6ed7e"
	I0216 17:44:17.868797  421205 logs.go:123] Gathering logs for Docker ...
	I0216 17:44:17.868824  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:44:17.926736  421205 logs.go:123] Gathering logs for kubelet ...
	I0216 17:44:17.926772  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:44:18.020846  421205 logs.go:123] Gathering logs for dmesg ...
	I0216 17:44:18.020884  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:44:18.046687  421205 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:44:18.046726  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:44:20.647381  421205 system_pods.go:59] 8 kube-system pods found
	I0216 17:44:20.647412  421205 system_pods.go:61] "coredns-5dd5756b68-6dd5s" [64070971-4c96-4bae-8c6a-e661926c6fc2] Running
	I0216 17:44:20.647420  421205 system_pods.go:61] "etcd-default-k8s-diff-port-816748" [08890543-29f8-4ada-8e5b-9f6d7867eb3c] Running
	I0216 17:44:20.647426  421205 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-816748" [86f78562-715d-466e-94c1-b3a76772ec12] Running
	I0216 17:44:20.647432  421205 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-816748" [5f311605-3fd2-4f1e-b0fa-cab39e6a86d2] Running
	I0216 17:44:20.647437  421205 system_pods.go:61] "kube-proxy-f7czt" [0f96b293-f1b0-42e8-b281-afae41342cf9] Running
	I0216 17:44:20.647442  421205 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-816748" [1366c3ec-1790-4ff3-b3aa-bb9dfe5b719a] Running
	I0216 17:44:20.647452  421205 system_pods.go:61] "metrics-server-57f55c9bc5-tdw8t" [5b4055e5-de9d-40e3-af47-591d406323be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:44:20.647460  421205 system_pods.go:61] "storage-provisioner" [756405eb-38f6-4c3e-9834-ef4f519f42ef] Running
	I0216 17:44:20.647471  421205 system_pods.go:74] duration metric: took 3.217742634s to wait for pod list to return data ...
	I0216 17:44:20.647482  421205 default_sa.go:34] waiting for default service account to be created ...
	I0216 17:44:20.649822  421205 default_sa.go:45] found service account: "default"
	I0216 17:44:20.649843  421205 default_sa.go:55] duration metric: took 2.354783ms for default service account to be created ...
	I0216 17:44:20.649851  421205 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 17:44:20.654326  421205 system_pods.go:86] 8 kube-system pods found
	I0216 17:44:20.654349  421205 system_pods.go:89] "coredns-5dd5756b68-6dd5s" [64070971-4c96-4bae-8c6a-e661926c6fc2] Running
	I0216 17:44:20.654354  421205 system_pods.go:89] "etcd-default-k8s-diff-port-816748" [08890543-29f8-4ada-8e5b-9f6d7867eb3c] Running
	I0216 17:44:20.654359  421205 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-816748" [86f78562-715d-466e-94c1-b3a76772ec12] Running
	I0216 17:44:20.654364  421205 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-816748" [5f311605-3fd2-4f1e-b0fa-cab39e6a86d2] Running
	I0216 17:44:20.654368  421205 system_pods.go:89] "kube-proxy-f7czt" [0f96b293-f1b0-42e8-b281-afae41342cf9] Running
	I0216 17:44:20.654372  421205 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-816748" [1366c3ec-1790-4ff3-b3aa-bb9dfe5b719a] Running
	I0216 17:44:20.654379  421205 system_pods.go:89] "metrics-server-57f55c9bc5-tdw8t" [5b4055e5-de9d-40e3-af47-591d406323be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:44:20.654388  421205 system_pods.go:89] "storage-provisioner" [756405eb-38f6-4c3e-9834-ef4f519f42ef] Running
	I0216 17:44:20.654398  421205 system_pods.go:126] duration metric: took 4.542164ms to wait for k8s-apps to be running ...
	I0216 17:44:20.654408  421205 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 17:44:20.654451  421205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:44:20.665886  421205 system_svc.go:56] duration metric: took 11.471113ms WaitForService to wait for kubelet.
	I0216 17:44:20.665915  421205 kubeadm.go:581] duration metric: took 4m13.545386541s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 17:44:20.665948  421205 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:44:20.668869  421205 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0216 17:44:20.668894  421205 node_conditions.go:123] node cpu capacity is 8
	I0216 17:44:20.668909  421205 node_conditions.go:105] duration metric: took 2.9556ms to run NodePressure ...
	I0216 17:44:20.668922  421205 start.go:228] waiting for startup goroutines ...
	I0216 17:44:20.668931  421205 start.go:233] waiting for cluster config update ...
	I0216 17:44:20.668948  421205 start.go:242] writing updated cluster config ...
	I0216 17:44:20.669255  421205 ssh_runner.go:195] Run: rm -f paused
	I0216 17:44:20.718390  421205 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0216 17:44:20.720392  421205 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-816748" cluster and "default" namespace by default
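The closing skew note compares client and server minor versions. A toy sketch of that computation (version strings taken from the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the middle number of a "major.minor.patch" version string.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectl, cluster := "1.29.2", "1.28.4" // values from the log
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}

From here the log returns to process 455078, whose kubeadm init (started at 17:42:53) finally fails at 17:46:54 in the block below.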
	I0216 17:46:54.897764  455078 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:46:54.897901  455078 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:46:54.900889  455078 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:46:54.900952  455078 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:46:54.901057  455078 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:46:54.901118  455078 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:46:54.901164  455078 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:46:54.901258  455078 kubeadm.go:322] OS: Linux
	I0216 17:46:54.901344  455078 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:46:54.901414  455078 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:46:54.901483  455078 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:46:54.901549  455078 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:46:54.901599  455078 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:46:54.901645  455078 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:46:54.901736  455078 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:46:54.901873  455078 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:46:54.902013  455078 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:46:54.902166  455078 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:46:54.902269  455078 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:46:54.902349  455078 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:46:54.902439  455078 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:46:54.905049  455078 out.go:204]   - Generating certificates and keys ...
	I0216 17:46:54.905136  455078 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:46:54.905209  455078 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:46:54.905290  455078 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:46:54.905360  455078 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:46:54.905435  455078 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:46:54.905485  455078 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:46:54.905549  455078 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:46:54.905608  455078 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:46:54.905668  455078 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:46:54.905730  455078 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:46:54.905789  455078 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:46:54.905857  455078 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:46:54.905899  455078 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:46:54.905946  455078 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:46:54.905996  455078 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:46:54.906054  455078 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:46:54.906113  455078 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:46:54.908366  455078 out.go:204]   - Booting up control plane ...
	I0216 17:46:54.908451  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:46:54.908521  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:46:54.908576  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:46:54.908644  455078 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:46:54.908802  455078 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:46:54.908855  455078 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:46:54.908861  455078 kubeadm.go:322] 
	I0216 17:46:54.908893  455078 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:46:54.908926  455078 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:46:54.908932  455078 kubeadm.go:322] 
	I0216 17:46:54.908967  455078 kubeadm.go:322] This error is likely caused by:
	I0216 17:46:54.908996  455078 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:46:54.909083  455078 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:46:54.909090  455078 kubeadm.go:322] 
	I0216 17:46:54.909170  455078 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:46:54.909199  455078 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:46:54.909225  455078 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:46:54.909231  455078 kubeadm.go:322] 
	I0216 17:46:54.909312  455078 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:46:54.909392  455078 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:46:54.909464  455078 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:46:54.909509  455078 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:46:54.909573  455078 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:46:54.909628  455078 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0216 17:46:54.909766  455078 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 17:46:54.909815  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:46:55.653997  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:46:55.665110  455078 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:46:55.665171  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:46:55.673735  455078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:46:55.673786  455078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:46:55.722375  455078 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:46:55.722432  455078 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:46:55.894761  455078 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:46:55.894856  455078 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:46:55.894909  455078 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:46:55.894973  455078 kubeadm.go:322] OS: Linux
	I0216 17:46:55.895037  455078 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:46:55.895101  455078 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:46:55.895159  455078 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:46:55.895220  455078 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:46:55.895285  455078 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:46:55.895341  455078 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:46:55.967714  455078 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:46:55.967839  455078 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:46:55.967958  455078 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 17:46:56.138307  455078 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:46:56.139389  455078 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:46:56.146473  455078 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:46:56.222590  455078 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:46:56.225987  455078 out.go:204]   - Generating certificates and keys ...
	I0216 17:46:56.226094  455078 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:46:56.226182  455078 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:46:56.226277  455078 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:46:56.226364  455078 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:46:56.226459  455078 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:46:56.226532  455078 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:46:56.226620  455078 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:46:56.226731  455078 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:46:56.226833  455078 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:46:56.226958  455078 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:46:56.227020  455078 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:46:56.227109  455078 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:46:56.394947  455078 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:46:56.547719  455078 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:46:56.909016  455078 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:46:57.118906  455078 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:46:57.119703  455078 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:46:57.121695  455078 out.go:204]   - Booting up control plane ...
	I0216 17:46:57.121837  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:46:57.126402  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:46:57.127880  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:46:57.128910  455078 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:46:57.132135  455078 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:47:37.132515  455078 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:50:57.133720  455078 kubeadm.go:322] 
	I0216 17:50:57.133814  455078 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:50:57.133878  455078 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:50:57.133889  455078 kubeadm.go:322] 
	I0216 17:50:57.133928  455078 kubeadm.go:322] This error is likely caused by:
	I0216 17:50:57.133973  455078 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:50:57.134138  455078 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:50:57.134168  455078 kubeadm.go:322] 
	I0216 17:50:57.134317  455078 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:50:57.134386  455078 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:50:57.134454  455078 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:50:57.134477  455078 kubeadm.go:322] 
	I0216 17:50:57.134600  455078 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:50:57.134682  455078 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0216 17:50:57.134772  455078 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0216 17:50:57.134854  455078 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:50:57.134948  455078 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:50:57.134989  455078 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:50:57.136987  455078 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:50:57.137100  455078 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:50:57.137301  455078 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:50:57.137405  455078 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:50:57.137479  455078 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:50:57.137562  455078 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
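
The first stderr warning above (cgroupfs vs systemd) is the mismatch that the exit suggestion at the end of this log also points at. One common remedy, sketched under the assumption that the node's Docker daemon reads /etc/docker/daemon.json and can be restarted (minikube's base image manages this file itself, so treat this as a debugging probe rather than a durable fix):

	# inside the node: point Docker at the systemd cgroup driver, then restart it
	echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker
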
	I0216 17:50:57.137603  455078 kubeadm.go:406] StartCluster complete in 12m22.638718493s
	I0216 17:50:57.137690  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:50:57.155966  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.155994  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:50:57.156042  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:50:57.173312  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.173339  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:50:57.173395  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:50:57.190861  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.190885  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:50:57.190939  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:50:57.208223  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.208245  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:50:57.208292  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:50:57.224808  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.224835  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:50:57.224887  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:50:57.242004  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.242026  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:50:57.242066  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:50:57.258500  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.258522  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:50:57.258562  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:50:57.275390  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.275415  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:50:57.275427  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:50:57.275443  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:50:57.336885  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:50:57.336911  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:50:57.336929  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:50:57.354268  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:50:57.354298  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:50:57.388996  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:50:57.389022  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:50:57.410914  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:50:57.418232  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:50:57.420274  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:50:57.423841  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:43 old-k8s-version-478853 kubelet[11238]: E0216 17:50:43.867344   11238 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:50:57.433982  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:49 old-k8s-version-478853 kubelet[11238]: E0216 17:50:49.865840   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:50:57.437556  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:51 old-k8s-version-478853 kubelet[11238]: E0216 17:50:51.865653   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:50:57.446171  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:56 old-k8s-version-478853 kubelet[11238]: E0216 17:50:56.867671   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:50:57.446448  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:56 old-k8s-version-478853 kubelet[11238]: E0216 17:50:56.868767   11238 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
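
Every ImageInspectError above reports the same thing: the image's Id or size is not set, which usually indicates a corrupt or partially loaded image in the node's Docker cache. A quick probe for one of the tags named in the log, run inside the node:

	# verify the cached image has usable metadata; re-pull it if the inspect fails
	docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 --format 'Id={{.Id}} Size={{.Size}}' \
	  || docker pull k8s.gcr.io/kube-apiserver:v1.16.0
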
	I0216 17:50:57.447246  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:50:57.447271  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0216 17:50:57.472300  455078 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:50:57.472350  455078 out.go:239] * 
	W0216 17:50:57.472421  455078 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:50:57.472446  455078 out.go:239] * 
	W0216 17:50:57.473265  455078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:50:57.475359  455078 out.go:177] X Problems detected in kubelet:
	I0216 17:50:57.477187  455078 out.go:177]   Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:50:57.478538  455078 out.go:177]   Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:50:57.479997  455078 out.go:177]   Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:50:57.482565  455078 out.go:177] 
	W0216 17:50:57.483906  455078 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:50:57.483958  455078 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:50:57.483983  455078 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
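
Spelled out as a command, the suggested retry looks like the sketch below (profile, driver, and Kubernetes version are taken from this run; other flags from the original invocation may also be needed):

	# retry the profile with the kubelet pinned to the systemd cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-478853 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
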
	I0216 17:50:57.485600  455078 out.go:177] 
	
	
	==> Docker <==
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[849]: time="2024-02-16T17:38:30.592980618Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[849]: time="2024-02-16T17:38:30.594584628Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[849]: time="2024-02-16T17:38:30.595536484Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:30.645142910Z" level=info msg="Starting up"
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:30.665356524Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:38:32 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:32.943709848Z" level=info msg="Loading containers: start."
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.046603047Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.084755081Z" level=info msg="Loading containers: done."
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.093893706Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.093969854Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.114320129Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.114404690Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:38:33 old-k8s-version-478853 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.297314929Z" level=info msg="ignoring event" container=e2af60e34ffaad5efd27998301557aa7bc6eafb37879f3641ec191f87756d240 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.365203834Z" level=info msg="ignoring event" container=501f90c2772906bc6d8ded9653807e77cb8a8a92587ad8fe1491c9b9c0875e6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.429498715Z" level=info msg="ignoring event" container=f872d0e5597d4ff659d8ce99042c5e1e430481e415e0287b6c0b970158121faa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.494454551Z" level=info msg="ignoring event" container=f19e6b2d39c6061bb413cdfe4fadfa71b989ba84c976ad403d332b29446cb4fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.423953515Z" level=info msg="ignoring event" container=b1b1b40b37050624d9c0b249cdca8e460ccce350d500ee9689e7d0b2f1a6d93d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.490441340Z" level=info msg="ignoring event" container=5cae44ae4a1b017120f0ee3d1e2fb8e897a46c84d4f5ecb92082ff2491dee106 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.553837099Z" level=info msg="ignoring event" container=cdda83e2154a7e2eb9b7f5b60fd5ba82cffbef69661670436c524e6d68f1aa40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.620386107Z" level=info msg="ignoring event" container=6783776bc128111b7a739f1f7b7bbc1ce484483b75360855dc6ec8cbeecc9c7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e a8 fe f3 03 85 08 06
	[Feb16 17:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba d4 5b d6 50 19 08 06
	[Feb16 17:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 c0 9b 14 00 15 08 06
	[Feb16 17:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 bc 63 d6 82 6d 08 06
	[Feb16 17:35] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 2e 9d 9f f9 35 08 06
	[Feb16 17:36] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 58 28 6b 8d e8 08 06
	[  +2.713951] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 dc a2 ed 93 ee 08 06
	[  +9.193699] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca a1 30 ea 88 7e 08 06
	[  +0.019629] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 7a 9f 93 dd d6 08 06
	[Feb16 17:37] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e de b5 78 ba 0d 08 06
	[Feb16 17:38] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea c2 b1 8d 0f 93 08 06
	[Feb16 17:40] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 d2 9f 93 96 cc 08 06
	[ +10.846771] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 5a c7 81 70 16 08 06
	
	
	==> kernel <==
	 17:50:58 up  1:33,  0 users,  load average: 0.00, 0.23, 0.99
	Linux old-k8s-version-478853 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.151312   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.197751   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.251493   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.351663   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.398775   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.451853   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.552049   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.598578   11238 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.652230   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.752387   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.799154   11238 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.852564   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.952770   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:57 old-k8s-version-478853 kubelet[11238]: E0216 17:50:57.999026   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.052915   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.153125   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.198557   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.253330   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.353560   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.399548   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.453776   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.535125   11238 event.go:246] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events: dial tcp 192.168.76.2:8443: connect: connection refused' (may retry after sleeping)
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.553961   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.599370   11238 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:50:58 old-k8s-version-478853 kubelet[11238]: E0216 17:50:58.654147   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
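
All of the kubelet errors above collapse into one symptom: nothing answers on 192.168.76.2:8443, so the node object is never registered. A minimal reachability probe from inside the node (address taken from the log):

	# confirm the apiserver endpoint is actually refusing connections
	curl -k --connect-timeout 2 https://192.168.76.2:8443/healthz || echo "apiserver unreachable"
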
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 2 (285.009372ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-478853" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (757.93s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (423.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
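
Each warning that follows is the same label-selector query failing against the stopped apiserver; issued by hand it is the sketch below (assuming kubectl is pointed at the old-k8s-version-478853 cluster):

	# the query the wait loop keeps retrying, issued manually
	kubectl -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
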
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 12 times]
E0216 17:51:12.841050   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 65 times]
E0216 17:52:18.638039   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 11 times]
E0216 17:52:30.193295   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 34 times]
E0216 17:53:05.515531   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 9 times]
E0216 17:53:15.900983   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 21 times]
E0216 17:53:37.749549   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated a further 22 times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
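The repeated warning above comes from the test's readiness poll: the helper lists pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label and treats "connection refused" as transient while the restarted apiserver at 192.168.76.2:8443 comes back up. Below is a minimal client-go sketch of that polling pattern; the kubeconfig path, interval, and timeout are illustrative assumptions, and this is not minikube's actual helpers_test.go code.

    // Hypothetical sketch of the poll loop behind the WARNING lines above.
    // Assumptions: kubeconfig path, 3s interval, 9m timeout.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load kubeconfig for the cluster under test (path is an assumption).
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	ns, selector := "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"
    	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				// Connection refused while the apiserver restarts: warn and retry.
    				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
    				return false, nil
    			}
    			// Succeed once at least one dashboard pod is listable again.
    			return len(pods.Items) > 0, nil
    		})
    	if err != nil {
    		panic(fmt.Errorf("dashboard pods never became listable: %w", err))
    	}
    }

Each failed List call would map to one WARNING line in the log; the poll only stops when the apiserver accepts connections again or the timeout expires, which is why the same message recurs for minutes below.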
E0216 17:54:13.710780   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 2 more times]
E0216 17:54:16.737488   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 9 more times]
E0216 17:54:26.469062   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 14 more times]
E0216 17:54:41.991076   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/default-k8s-diff-port-816748/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 29 more times]
E0216 17:55:12.008793   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 14 more times]
E0216 17:55:26.829862   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 13 more times]
E0216 17:55:40.857059   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 4 more times]
E0216 17:55:45.303118   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 26 more times]
E0216 17:56:12.841397   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 5 more times]
E0216 17:56:18.946068   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 15 more times]
E0216 17:56:35.059802   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above repeated 14 more times ...]
E0216 17:56:49.874244   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above repeated 28 more times ...]
E0216 17:57:18.637822   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above repeated 11 more times ...]
E0216 17:57:30.193566   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above repeated 29 more times ...]
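All of the warnings above come from a single poll loop: the test helper repeatedly lists pods in the "kubernetes-dashboard" namespace by label selector, and each attempt fails because nothing is listening on the apiserver endpoint (192.168.76.2:8443) after the stop/start cycle. Below is a minimal standalone sketch of that polling pattern written against client-go; it is an illustration only, not minikube's actual helpers_test.go code, and the kubeconfig path and 5-second interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig (here: ~/.kube/config; the test profile's path would differ).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for up to 9 minutes, mirroring the 9m0s deadline in the failure below.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil && len(pods.Items) > 0 {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		if err != nil {
			// This is the source of the repeated "connection refused" WARNING lines.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded") // matches the failure below
			return
		case <-time.After(5 * time.Second):
		}
	}
}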
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-478853 -n old-k8s-version-478853
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 2 (280.419713ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-478853" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
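The status probe above renders a Go text/template against minikube's status object: `{{.APIServer}}` prints "Stopped" here even though the host container stays up. A small sketch of that template mechanism follows; the Status struct is a stand-in whose field names are inferred from the `--format` flags used in this log, not minikube's actual type.

package main

import (
	"os"
	"text/template"
)

// Stand-in for the object minikube's `status --format` templates against;
// only the field names are inferred from the flags in this log.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"}
	// Equivalent of `status --format={{.APIServer}}` in the probe above.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped"
}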
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-478853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-478853:
-- stdout --
	[
	    {
	        "Id": "74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde",
	        "Created": "2024-02-16T17:28:05.344964673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455353,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:38:21.761755666Z",
	            "FinishedAt": "2024-02-16T17:38:20.210098294Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hostname",
	        "HostsPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/hosts",
	        "LogPath": "/var/lib/docker/containers/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde/74b66ed59b2b04b54f2d2a9f2d5252a296723d9f3a251b88b2bb07496976cfde-json.log",
	        "Name": "/old-k8s-version-478853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-478853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-478853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083-init/diff:/var/lib/docker/overlay2/399457765d8a71bf3b9151eb69e824afe917f6f0e4f38614a9c00a72b38b806a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8b746ed1c7e2739d6ff7faf8fc718f1484cabd23b642a71f94e72690435a083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-478853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-478853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-478853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-478853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "199c16a2a4e5610e66ab3ac8041b86ba652305b9a0affd9b2a79a513df594615",
	            "SandboxKey": "/var/run/docker/netns/199c16a2a4e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-478853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74b66ed59b2b",
	                        "old-k8s-version-478853"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "166a9b0cbcbad81945e5ddf7b3ae3a6fed94ef48dba3d7d6ceb648c91593d0fb",
	                    "EndpointID": "cb8d24629aaf63d27bbb12983ffcbd66ccc33e142bfce98dd2d283368110e8a2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-478853",
	                        "74b66ed59b2b"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
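The inspect output above narrows the failure down: the container is running (State.Status), it holds 192.168.76.2 on the old-k8s-version-478853 network, and 8443/tcp is published to 127.0.0.1:33099, so the "connection refused" errors originate from the apiserver process inside the container rather than from Docker networking. The same fields can be read programmatically; below is a minimal sketch using the Docker Engine Go SDK (github.com/docker/docker/client), with the container name taken from this log.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Same data as `docker inspect old-k8s-version-478853` above.
	info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-478853")
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", info.State.Status) // "running" in this report
	for name, ep := range info.NetworkSettings.Networks {
		fmt.Printf("network %s ip %s\n", name, ep.IPAddress) // 192.168.76.2
	}
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // e.g. 8443/tcp -> 127.0.0.1:33099
		}
	}
}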
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 2 (275.230249ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-478853 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p no-preload-408847                                   | no-preload-408847            | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	| delete  | -p no-preload-408847                                   | no-preload-408847            | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	| start   | -p newest-cni-398474 --memory=2200 --alsologtostderr   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-398474             | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-398474                  | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-398474 --memory=2200 --alsologtostderr   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:38 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-398474 image list                           | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	| delete  | -p newest-cni-398474                                   | newest-cni-398474            | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	| stop    | -p old-k8s-version-478853                              | old-k8s-version-478853       | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-478853             | old-k8s-version-478853       | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC | 16 Feb 24 17:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-478853                              | old-k8s-version-478853       | jenkins | v1.32.0 | 16 Feb 24 17:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-162802 image list                          | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	| delete  | -p embed-certs-162802                                  | embed-certs-162802           | jenkins | v1.32.0 | 16 Feb 24 17:40 UTC | 16 Feb 24 17:40 UTC |
	| image   | default-k8s-diff-port-816748                           | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-816748 | jenkins | v1.32.0 | 16 Feb 24 17:44 UTC | 16 Feb 24 17:44 UTC |
	|         | default-k8s-diff-port-816748                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 17:38:21
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 17:38:21.303089  455078 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:38:21.303345  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:38:21.303354  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:38:21.303359  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:38:21.303563  455078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:38:21.304200  455078 out.go:298] Setting JSON to false
	I0216 17:38:21.305432  455078 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4848,"bootTime":1708100254,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 17:38:21.305506  455078 start.go:139] virtualization: kvm guest
	I0216 17:38:21.307760  455078 out.go:177] * [old-k8s-version-478853] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 17:38:21.310010  455078 notify.go:220] Checking for updates...
	I0216 17:38:21.310012  455078 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:38:21.311432  455078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:38:21.312916  455078 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:38:21.314294  455078 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 17:38:21.315598  455078 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 17:38:21.316976  455078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:38:21.318997  455078 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:38:21.321025  455078 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0216 17:38:21.322407  455078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:38:21.345628  455078 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:38:21.345735  455078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:38:21.400126  455078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:38:21.390220676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:38:21.400280  455078 docker.go:295] overlay module found
	I0216 17:38:21.402314  455078 out.go:177] * Using the docker driver based on existing profile
	I0216 17:38:21.403808  455078 start.go:299] selected driver: docker
	I0216 17:38:21.403824  455078 start.go:903] validating driver "docker" against &{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:21.403921  455078 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:38:21.404778  455078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:38:21.460365  455078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:38:21.451261069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:38:21.460674  455078 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:38:21.460728  455078 cni.go:84] Creating CNI manager for ""
	I0216 17:38:21.460750  455078 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:38:21.460764  455078 start_flags.go:323] config:
	{Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:21.464108  455078 out.go:177] * Starting control plane node old-k8s-version-478853 in cluster old-k8s-version-478853
	I0216 17:38:21.465746  455078 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:38:21.467261  455078 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:38:21.468714  455078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:38:21.468746  455078 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:38:21.468770  455078 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 17:38:21.468818  455078 cache.go:56] Caching tarball of preloaded images
	I0216 17:38:21.468909  455078 preload.go:174] Found /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 17:38:21.468919  455078 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:38:21.469017  455078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:38:21.486258  455078 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:38:21.486284  455078 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:38:21.486302  455078 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:38:21.486342  455078 start.go:365] acquiring machines lock for old-k8s-version-478853: {Name:mkde5e52743909de9e75497b3ed0dd80f14fc0ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:38:21.486408  455078 start.go:369] acquired machines lock for "old-k8s-version-478853" in 40.03µs
	I0216 17:38:21.486432  455078 start.go:96] Skipping create...Using existing machine configuration
	I0216 17:38:21.486439  455078 fix.go:54] fixHost starting: 
	I0216 17:38:21.486680  455078 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:38:21.504783  455078 fix.go:102] recreateIfNeeded on old-k8s-version-478853: state=Stopped err=<nil>
	W0216 17:38:21.504825  455078 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 17:38:21.506811  455078 out.go:177] * Restarting existing docker container for "old-k8s-version-478853" ...
	I0216 17:38:18.761435  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:21.246854  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:21.140505  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:23.640145  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:25.640932  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:21.508568  455078 cli_runner.go:164] Run: docker start old-k8s-version-478853
	I0216 17:38:21.769480  455078 cli_runner.go:164] Run: docker container inspect old-k8s-version-478853 --format={{.State.Status}}
	I0216 17:38:21.789204  455078 kic.go:430] container "old-k8s-version-478853" state is running.
	I0216 17:38:21.789622  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:21.808063  455078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/config.json ...
	I0216 17:38:21.808370  455078 machine.go:88] provisioning docker machine ...
	I0216 17:38:21.808408  455078 ubuntu.go:169] provisioning hostname "old-k8s-version-478853"
	I0216 17:38:21.808455  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:21.826185  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:21.826686  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:21.826710  455078 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-478853 && echo "old-k8s-version-478853" | sudo tee /etc/hostname
	I0216 17:38:21.827431  455078 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44460->127.0.0.1:33102: read: connection reset by peer
	I0216 17:38:24.971815  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-478853
	
	I0216 17:38:24.971897  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:24.989390  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:24.989714  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:24.989739  455078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-478853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-478853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-478853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:38:25.120712  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:38:25.120747  455078 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
	I0216 17:38:25.120784  455078 ubuntu.go:177] setting up certificates
	I0216 17:38:25.120795  455078 provision.go:83] configureAuth start
	I0216 17:38:25.120844  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:25.140311  455078 provision.go:138] copyHostCerts
	I0216 17:38:25.140392  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
	I0216 17:38:25.140404  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
	I0216 17:38:25.140473  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
	I0216 17:38:25.140575  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
	I0216 17:38:25.140585  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
	I0216 17:38:25.140611  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
	I0216 17:38:25.140678  455078 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
	I0216 17:38:25.140685  455078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
	I0216 17:38:25.140706  455078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
	I0216 17:38:25.140759  455078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-478853 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-478853]
	I0216 17:38:25.293113  455078 provision.go:172] copyRemoteCerts
	I0216 17:38:25.293171  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:38:25.293215  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.311679  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:25.405147  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 17:38:25.429153  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 17:38:25.454627  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 17:38:25.477710  455078 provision.go:86] duration metric: configureAuth took 356.904526ms
	I0216 17:38:25.477736  455078 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:38:25.477903  455078 config.go:182] Loaded profile config "old-k8s-version-478853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:38:25.477947  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.495763  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.496095  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.496108  455078 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:38:25.628939  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:38:25.628966  455078 ubuntu.go:71] root file system type: overlay
	I0216 17:38:25.629075  455078 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:38:25.629128  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.647033  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.647356  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.647419  455078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:38:25.796668  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:38:25.796764  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.815271  455078 main.go:141] libmachine: Using SSH client type: native
	I0216 17:38:25.815583  455078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 17:38:25.815601  455078 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:38:25.957528  455078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:38:25.957560  455078 machine.go:91] provisioned docker machine in 4.149165092s
	I0216 17:38:25.957575  455078 start.go:300] post-start starting for "old-k8s-version-478853" (driver="docker")
	I0216 17:38:25.957589  455078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:38:25.957706  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:38:25.957761  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:25.976195  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.069365  455078 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:38:26.072831  455078 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:38:26.072871  455078 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:38:26.072884  455078 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:38:26.072893  455078 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:38:26.072906  455078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
	I0216 17:38:26.072974  455078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
	I0216 17:38:26.073063  455078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
	I0216 17:38:26.073181  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:38:26.081215  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:38:26.103318  455078 start.go:303] post-start completed in 145.726596ms
	I0216 17:38:26.103402  455078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:38:26.103446  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.121271  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.213029  455078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:38:26.217252  455078 fix.go:56] fixHost completed within 4.730808663s
	I0216 17:38:26.217282  455078 start.go:83] releasing machines lock for "old-k8s-version-478853", held for 4.730859928s
	I0216 17:38:26.217359  455078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-478853
	I0216 17:38:26.236067  455078 ssh_runner.go:195] Run: cat /version.json
	I0216 17:38:26.236096  455078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:38:26.236126  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.236181  455078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-478853
	I0216 17:38:26.255208  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.256650  455078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/old-k8s-version-478853/id_rsa Username:docker}
	I0216 17:38:26.432006  455078 ssh_runner.go:195] Run: systemctl --version
	I0216 17:38:26.436397  455078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 17:38:26.440753  455078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 17:38:26.440819  455078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:38:26.449648  455078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:38:26.458023  455078 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0216 17:38:26.458059  455078 start.go:475] detecting cgroup driver to use...
	I0216 17:38:26.458090  455078 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:38:26.458223  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:38:26.474175  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:38:26.484094  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:38:26.493935  455078 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:38:26.494002  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:38:26.503403  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:38:26.512684  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:38:26.521909  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:38:26.531787  455078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:38:26.540705  455078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:38:26.550084  455078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:38:26.558059  455078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:38:26.565815  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:26.641416  455078 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 17:38:26.728849  455078 start.go:475] detecting cgroup driver to use...
	I0216 17:38:26.728911  455078 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:38:26.728990  455078 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:38:26.742735  455078 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:38:26.742813  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:38:26.759375  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:38:26.799127  455078 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:38:26.803185  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:38:26.812600  455078 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:38:26.833140  455078 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:38:26.932984  455078 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:38:27.033484  455078 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:38:27.033629  455078 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:38:27.051185  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:27.130916  455078 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:38:27.399678  455078 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:38:27.425421  455078 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:38:23.747120  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:25.747228  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:27.749489  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:28.141295  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:30.640768  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:27.452311  455078 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:38:27.452430  455078 cli_runner.go:164] Run: docker network inspect old-k8s-version-478853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:38:27.470021  455078 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 17:38:27.473738  455078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:38:27.498087  455078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:38:27.498175  455078 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:38:27.517834  455078 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:38:27.517864  455078 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:38:27.517929  455078 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:38:27.526852  455078 ssh_runner.go:195] Run: which lz4
	I0216 17:38:27.530297  455078 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0216 17:38:27.533688  455078 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:38:27.533725  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 17:38:28.338789  455078 docker.go:649] Took 0.808536 seconds to copy over tarball
	I0216 17:38:28.338870  455078 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:38:30.411788  455078 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072893303s)
	I0216 17:38:30.411815  455078 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 17:38:30.479175  455078 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:38:30.487733  455078 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:38:30.505100  455078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:38:30.582595  455078 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:38:30.247143  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:32.747816  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:33.141181  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:35.639892  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:33.116313  455078 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.533681626s)
	I0216 17:38:33.116382  455078 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:38:33.135813  455078 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:38:33.135845  455078 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:38:33.135858  455078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:38:33.137162  455078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.137160  455078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.137160  455078 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.137223  455078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.137354  455078 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.137392  455078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.137429  455078 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:38:33.137443  455078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.138311  455078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.138333  455078 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.138313  455078 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:38:33.138376  455078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.138385  455078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.138313  455078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.138400  455078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.138433  455078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.285042  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 17:38:33.303267  455078 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 17:38:33.303312  455078 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:38:33.303348  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:38:33.315066  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.321725  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 17:38:33.323279  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.334757  455078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 17:38:33.334805  455078 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.334852  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:38:33.343699  455078 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 17:38:33.343747  455078 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.343793  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:38:33.352683  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.354280  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:38:33.362703  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:38:33.371065  455078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 17:38:33.371116  455078 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.371157  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:38:33.375587  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.376027  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.388362  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.393888  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:38:33.398036  455078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 17:38:33.398083  455078 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.398130  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:38:33.398631  455078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 17:38:33.398662  455078 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.398705  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:38:33.409280  455078 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 17:38:33.409328  455078 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.409390  455078 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:38:33.417932  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:38:33.419058  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:38:33.429478  455078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 17:38:33.927751  455078 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:38:33.946761  455078 cache_images.go:92] LoadImages completed in 810.887895ms
	W0216 17:38:33.946835  455078 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0216 17:38:33.946924  455078 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:38:33.998980  455078 cni.go:84] Creating CNI manager for ""
	I0216 17:38:33.999011  455078 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:38:33.999032  455078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:38:33.999057  455078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-478853 NodeName:old-k8s-version-478853 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:38:33.999219  455078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-478853"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-478853
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
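
kubeadm.go:176 and kubeadm.go:181 above show an options struct being rendered into the three kubeadm config documents. A toy sketch of that options-to-YAML pattern with text/template; the cut-down struct and field names here are illustrative, not minikube's:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    type opts struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	// Render to stdout; minikube instead ships the result to
    	// /var/tmp/minikube/kubeadm.yaml.new over SSH (see the scp lines below).
    	if err := t.Execute(os.Stdout, opts{"192.168.76.2", 8443, "old-k8s-version-478853"}); err != nil {
    		log.Fatal(err)
    	}
    }
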
	
	I0216 17:38:33.999336  455078 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-478853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 17:38:33.999401  455078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:38:34.008330  455078 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:38:34.008396  455078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:38:34.017118  455078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 17:38:34.036229  455078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:38:34.052983  455078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 17:38:34.069858  455078 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:38:34.073399  455078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:38:34.084821  455078 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853 for IP: 192.168.76.2
	I0216 17:38:34.084858  455078 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:38:34.085003  455078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
	I0216 17:38:34.085065  455078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
	I0216 17:38:34.085164  455078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/client.key
	I0216 17:38:34.085237  455078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key.31bdca25
	I0216 17:38:34.085304  455078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key
	I0216 17:38:34.085439  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
	W0216 17:38:34.085482  455078 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
	I0216 17:38:34.085498  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
	I0216 17:38:34.085534  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
	I0216 17:38:34.085568  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:38:34.085605  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
	I0216 17:38:34.085675  455078 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
	I0216 17:38:34.086382  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:38:34.110629  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 17:38:34.134912  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:38:34.158975  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/old-k8s-version-478853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 17:38:34.182778  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:38:34.206586  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 17:38:34.230134  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:38:34.254430  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 17:38:34.277612  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
	I0216 17:38:34.300924  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
	I0216 17:38:34.323994  455078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:38:34.347005  455078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:38:34.363860  455078 ssh_runner.go:195] Run: openssl version
	I0216 17:38:34.369225  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:38:34.378947  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.382670  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.382744  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:38:34.389395  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:38:34.398260  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
	I0216 17:38:34.407649  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.411256  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.411309  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
	I0216 17:38:34.417851  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
	I0216 17:38:34.426535  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
	I0216 17:38:34.436025  455078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.439431  455078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.439491  455078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
	I0216 17:38:34.445718  455078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
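
The test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: each CA placed in /usr/share/ca-certificates also needs a /etc/ssl/certs/<subject-hash>.0 symlink so the OpenSSL lookup by hash finds it. A sketch of computing the hash and creating the link, shelling out to the same openssl binary the runner uses:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> certPath, mirroring
    // the `openssl x509 -hash -noout` + `ln -fs` steps in the log.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // the -f in ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }
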
	I0216 17:38:34.455048  455078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:38:34.458881  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 17:38:34.465622  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 17:38:34.472122  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 17:38:34.478657  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 17:38:34.485187  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 17:38:34.491630  455078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
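
Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check in pure Go against a local PEM file, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the given window (24h for the -checkend 86400 calls above).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
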
	I0216 17:38:34.498893  455078 kubeadm.go:404] StartCluster: {Name:old-k8s-version-478853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-478853 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:38:34.499126  455078 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:38:34.518382  455078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:38:34.527854  455078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 17:38:34.527878  455078 kubeadm.go:636] restartCluster start
	I0216 17:38:34.527928  455078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 17:38:34.536194  455078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:34.537015  455078 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-478853" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:38:34.537514  455078 kubeconfig.go:146] "old-k8s-version-478853" context is missing from /home/jenkins/minikube-integration/17936-6821/kubeconfig - will repair!
	I0216 17:38:34.538343  455078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/kubeconfig: {Name:mkdc2ed683d72ff0e162ea619463de7edb9c0858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:38:34.540022  455078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 17:38:34.548446  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:34.548492  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:34.558247  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
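
From here the runner re-probes for an apiserver process with pgrep roughly every 500ms; the repeats below continue until kubeadm.go gives up at 17:38:44 with "context deadline exceeded". A minimal version of such a poll loop (the pgrep pattern is taken from the log; the 10s deadline is illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a matching process exists or ctx expires.
    func waitForAPIServer(ctx context.Context) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // pgrep exits 0 once a matching pid exists
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // surfaces as "context deadline exceeded"
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx))
    }
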
	I0216 17:38:35.049347  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:35.049468  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:35.059915  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:35.549359  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:35.549453  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:35.559843  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:36.049307  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:36.049396  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:36.059322  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:35.246221  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:37.246568  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:37.641066  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:40.140454  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:36.549105  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:36.549213  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:36.559873  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:37.049327  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:37.049438  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:37.060186  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:37.548692  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:37.548776  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:37.559318  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:38.048848  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:38.048932  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:38.059825  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:38.549312  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:38.549402  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:38.559567  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:39.049162  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:39.049259  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:39.060000  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:39.549306  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:39.549387  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:39.559839  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:40.049293  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:40.049368  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:40.059831  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:40.549417  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:40.549497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:40.559373  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:41.048862  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:41.048945  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:41.059288  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:39.247561  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:41.748493  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:42.140801  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:44.640033  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:41.549382  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:41.549484  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:41.559314  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:42.048976  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:42.049123  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:42.059008  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:42.548578  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:42.548667  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:42.558842  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:43.049308  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:43.049406  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:43.059857  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:43.549518  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:43.549600  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:43.559742  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:44.049320  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:44.049427  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:44.059859  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:44.548752  455078 api_server.go:166] Checking apiserver status ...
	I0216 17:38:44.548839  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:38:44.560016  455078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:38:44.560053  455078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0216 17:38:44.560062  455078 kubeadm.go:1135] stopping kube-system containers ...
	I0216 17:38:44.560127  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:38:44.578770  455078 docker.go:483] Stopping containers: [075b0ec6a484 d2ce0b886430 928d392994b3 5e7370fcf7f8]
	I0216 17:38:44.578834  455078 ssh_runner.go:195] Run: docker stop 075b0ec6a484 d2ce0b886430 928d392994b3 5e7370fcf7f8
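
docker.go:483 above first lists kube-system container IDs with a name filter, then stops them all in a single docker stop invocation. A sketch of that list-then-stop step under the same filter:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopKubeSystem lists containers whose names match the kube-system pattern
    // used in the log, then stops them with one `docker stop`.
    func stopKubeSystem() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil // nothing to stop
    	}
    	fmt.Println("Stopping containers:", ids)
    	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
    	if err := stopKubeSystem(); err != nil {
    		fmt.Println(err)
    	}
    }
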
	I0216 17:38:44.596955  455078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 17:38:44.609545  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:38:44.618238  455078 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 16 17:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 16 17:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 16 17:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 16 17:32 /etc/kubernetes/scheduler.conf
	
	I0216 17:38:44.618338  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 17:38:44.626677  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 17:38:44.634782  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 17:38:44.643301  455078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 17:38:44.651439  455078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:38:44.659643  455078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 17:38:44.659668  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:44.715075  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.624969  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.844221  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:38:45.921661  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
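
The five kubeadm invocations above run individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init, which is what lets the restart reuse existing cluster state. A sketch of driving those phases in order; the binaries PATH prefix comes from the log, while the fixed fallback PATH is an assumption (the remote shell appends $PATH itself):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("sudo", append([]string{"env",
    			"PATH=/var/lib/minikube/binaries/v1.16.0:/usr/bin", "kubeadm"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("phase %v: %v\n%s", p, err, out)
    		}
    		fmt.Println("completed phase", p)
    	}
    }
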
	I0216 17:38:46.017075  455078 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:38:46.017183  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:44.246867  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:46.247223  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:47.140687  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:49.640734  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:46.517829  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:47.018038  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:47.518055  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:48.018190  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:48.517516  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:49.017903  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:49.517300  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:50.017289  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:50.517571  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:51.017570  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:48.247348  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:50.747444  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:52.750448  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:52.140329  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:54.641789  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:51.517363  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:52.017595  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:52.517311  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:53.017396  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:53.517392  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:54.017334  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:54.517678  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:55.017257  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:55.517766  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:56.018102  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:55.247481  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:57.747095  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:57.140707  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:59.640217  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:38:56.517703  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:57.017370  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:57.518275  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:58.017728  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:58.517273  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:59.017508  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:59.517232  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:00.017311  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:00.518159  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:01.017950  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:38:59.747967  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:02.246918  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:01.640535  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:04.140454  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:01.517978  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:02.017445  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:02.518044  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:03.017623  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:03.517519  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:04.018161  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:04.517338  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:05.018128  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:05.518224  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:06.017573  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:04.747285  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:06.748002  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:06.140588  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:08.640075  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:10.640834  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:06.517756  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:07.017566  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:07.518227  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:08.017309  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:08.517919  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:09.017261  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:09.517958  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:10.018104  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:10.517630  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:11.017722  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:09.246644  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:11.247325  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:13.140690  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:15.639645  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:11.517385  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:12.018082  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:12.518218  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:13.017548  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:13.517305  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:14.017745  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:14.517334  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:15.018048  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:15.517744  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:16.018296  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:13.747391  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:15.747767  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:17.747895  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:17.640336  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:19.641039  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:16.517970  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:17.017324  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:17.517497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:18.017541  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:18.517634  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:19.017283  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:19.518252  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:20.018182  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:20.517728  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:21.017730  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:19.749099  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:22.247204  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:22.140431  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:24.140986  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:21.517816  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:22.017751  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:22.517782  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:23.018273  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:23.517621  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:24.017984  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:24.517954  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:25.018276  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:25.517286  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:26.017373  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:24.747551  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:26.747774  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:26.639947  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:28.640616  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:30.640740  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:26.517418  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:27.017640  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:27.517287  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:28.017677  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:28.517756  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:29.017227  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:29.517587  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:30.017969  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:30.518374  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:31.017306  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:29.246627  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:31.747429  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:33.140469  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:35.640295  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:31.517715  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:32.017728  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:32.517510  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:33.018287  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:33.517848  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:34.018088  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:34.518190  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:35.017886  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:35.517921  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:36.017601  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:33.748340  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:36.246559  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:38.141091  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:40.642937  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:36.517708  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:37.017256  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:37.518107  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:38.018257  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:38.517396  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:39.018308  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:39.517977  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:40.017391  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:40.517676  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:41.018082  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:38.741460  421205 pod_ready.go:81] duration metric: took 4m0.000603771s waiting for pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace to be "Ready" ...
	E0216 17:39:38.741515  421205 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-b4tfl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0216 17:39:38.741533  421205 pod_ready.go:38] duration metric: took 4m12.045748032s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 17:39:38.741559  421205 kubeadm.go:640] restartCluster took 4m28.365798554s
	W0216 17:39:38.741619  421205 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0216 17:39:38.741647  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
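
pod_ready.go above enforces a 4m0s budget per pod Ready condition; when metrics-server never turns Ready, restartCluster is abandoned and the cluster is reset. A simplified version of that wait loop, polling via kubectl for brevity (minikube itself uses client-go), with the pod name taken from the log and the 2s cadence matching its timestamps:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the pod's Ready condition until it is True or the
    // 4m budget is spent, mirroring the pod_ready.go behavior above.
    func waitPodReady(ns, name string) error {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, _ := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting 4m0s for pod %q to be Ready", name)
    }

    func main() {
    	fmt.Println(waitPodReady("kube-system", "metrics-server-57f55c9bc5-b4tfl"))
    }
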
	I0216 17:39:43.140804  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:45.640700  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:45.437451  421205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.695785181s)
	I0216 17:39:45.437509  421205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:39:45.449061  421205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:39:45.457885  421205 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:39:45.457936  421205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:39:45.466012  421205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:39:45.466056  421205 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:39:45.508738  421205 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0216 17:39:45.508791  421205 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:39:45.558205  421205 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:39:45.558302  421205 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:39:45.558347  421205 kubeadm.go:322] OS: Linux
	I0216 17:39:45.558428  421205 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:39:45.558485  421205 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:39:45.558553  421205 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:39:45.558668  421205 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:39:45.558732  421205 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:39:45.558772  421205 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:39:45.558807  421205 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0216 17:39:45.558847  421205 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0216 17:39:45.558884  421205 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0216 17:39:45.627418  421205 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:39:45.627548  421205 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:39:45.627688  421205 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 17:39:45.912474  421205 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:39:41.517622  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:42.018155  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:42.517827  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:43.017315  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:43.518231  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:44.017682  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:44.518286  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:45.017388  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:45.517539  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:46.017624  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:39:46.037272  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.037295  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:39:46.037341  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:39:46.055115  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.055155  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:39:46.055211  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:39:46.072423  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.072450  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:39:46.072507  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:39:46.090301  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.090332  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:39:46.090378  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:39:46.107880  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.107903  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:39:46.107956  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:39:46.125772  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.125798  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:39:46.125854  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:39:46.144677  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.144701  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:39:46.144756  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:39:46.162329  455078 logs.go:276] 0 containers: []
	W0216 17:39:46.162352  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:39:46.162364  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:39:46.162380  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:39:46.185113  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:24 old-k8s-version-478853 kubelet[1655]: E0216 17:39:24.090711    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.185260  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:24 old-k8s-version-478853 kubelet[1655]: E0216 17:39:24.091853    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.187251  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:25 old-k8s-version-478853 kubelet[1655]: E0216 17:39:25.090502    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.194562  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:29 old-k8s-version-478853 kubelet[1655]: E0216 17:39:29.089933    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:46.207697  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.211063  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.219621  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.220204  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:39:46.231233  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:39:46.231271  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:39:46.254556  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:39:46.254587  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:39:45.914573  421205 out.go:204]   - Generating certificates and keys ...
	I0216 17:39:45.914675  421205 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:39:45.914799  421205 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:39:45.914914  421205 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:39:45.915001  421205 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:39:45.915089  421205 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:39:45.915541  421205 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:39:45.916033  421205 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:39:45.916419  421205 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:39:45.916848  421205 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:39:45.917282  421205 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:39:45.917754  421205 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:39:45.917840  421205 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:39:46.148582  421205 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:39:46.292877  421205 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:39:46.367973  421205 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:39:46.626595  421205 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:39:46.627016  421205 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:39:46.629773  421205 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:39:46.631711  421205 out.go:204]   - Booting up control plane ...
	I0216 17:39:46.631800  421205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:39:46.631863  421205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:39:46.632578  421205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:39:46.646321  421205 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:39:46.647004  421205 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:39:46.647046  421205 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 17:39:46.742531  421205 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:39:48.140674  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:50.141346  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	W0216 17:39:46.318337  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:39:46.318446  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:39:46.318467  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:39:46.335929  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:39:46.335962  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:39:46.372855  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:46.372884  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:39:46.372951  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:39:46.372966  455078 out.go:239]   Feb 16 17:39:29 old-k8s-version-478853 kubelet[1655]: E0216 17:39:29.089933    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:46.372982  455078 out.go:239]   Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:46.372999  455078 out.go:239]   Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:46.373011  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:46.373032  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:39:46.373043  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:46.373054  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
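
All four control-plane pods flagged above are failing with the same error, ImageInspectError: "Id or size of image ... is not set" — Docker returned image metadata that the v1.16-era kubelet considers incomplete. A spot-check of what the image cache actually reports is sketched below; this command is an assumed diagnostic follow-up, not something run in this log.

	# Assumed diagnostic (not from the log): ask Docker what it holds for one
	# of the control-plane images the kubelet failed to inspect.
	docker image inspect k8s.gcr.io/kube-scheduler:v1.16.0 \
	  --format 'id={{.Id}} size={{.Size}}'
	# A fully loaded image prints a sha256 Id and a non-zero Size; empty or
	# missing fields here would be consistent with the kubelet's complaint.
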
	I0216 17:39:52.244564  421205 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502426 seconds
	I0216 17:39:52.244744  421205 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0216 17:39:52.257745  421205 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0216 17:39:52.780917  421205 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0216 17:39:52.781167  421205 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-816748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0216 17:39:53.290300  421205 kubeadm.go:322] [bootstrap-token] Using token: b545ud.qoxywc1rux2naq15
	I0216 17:39:53.291755  421205 out.go:204]   - Configuring RBAC rules ...
	I0216 17:39:53.291900  421205 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0216 17:39:53.296340  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0216 17:39:53.305516  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0216 17:39:53.308824  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0216 17:39:53.311990  421205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0216 17:39:53.315096  421205 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0216 17:39:53.326643  421205 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0216 17:39:53.516995  421205 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0216 17:39:53.702313  421205 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0216 17:39:53.703508  421205 kubeadm.go:322] 
	I0216 17:39:53.703621  421205 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0216 17:39:53.703643  421205 kubeadm.go:322] 
	I0216 17:39:53.703738  421205 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0216 17:39:53.703749  421205 kubeadm.go:322] 
	I0216 17:39:53.703791  421205 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0216 17:39:53.703859  421205 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0216 17:39:53.703917  421205 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0216 17:39:53.703923  421205 kubeadm.go:322] 
	I0216 17:39:53.703990  421205 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0216 17:39:53.703997  421205 kubeadm.go:322] 
	I0216 17:39:53.704048  421205 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0216 17:39:53.704054  421205 kubeadm.go:322] 
	I0216 17:39:53.704115  421205 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0216 17:39:53.704243  421205 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0216 17:39:53.704316  421205 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0216 17:39:53.704324  421205 kubeadm.go:322] 
	I0216 17:39:53.704429  421205 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0216 17:39:53.704536  421205 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0216 17:39:53.704543  421205 kubeadm.go:322] 
	I0216 17:39:53.704641  421205 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token b545ud.qoxywc1rux2naq15 \
	I0216 17:39:53.704736  421205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c33b1f5c4481e3865d2c10e6d2d19afe2a2ea581c4fb2eeaf81b4cbf188a97ed \
	I0216 17:39:53.704769  421205 kubeadm.go:322] 	--control-plane 
	I0216 17:39:53.704776  421205 kubeadm.go:322] 
	I0216 17:39:53.704878  421205 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0216 17:39:53.704885  421205 kubeadm.go:322] 
	I0216 17:39:53.704982  421205 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token b545ud.qoxywc1rux2naq15 \
	I0216 17:39:53.705100  421205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c33b1f5c4481e3865d2c10e6d2d19afe2a2ea581c4fb2eeaf81b4cbf188a97ed 
	I0216 17:39:53.708918  421205 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:39:53.709126  421205 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
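
For reference, the --discovery-token-ca-cert-hash value embedded in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the standard kubeadm recipe, assuming the default PKI path and an RSA CA (kubeadm's default):

	# Recompute the discovery token CA cert hash (standard recipe; assumes
	# /etc/kubernetes/pki/ca.crt exists and the CA key is RSA).
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the c33b1f5c... value printed by kubeadm above.
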
	I0216 17:39:53.709148  421205 cni.go:84] Creating CNI manager for ""
	I0216 17:39:53.709168  421205 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 17:39:53.711998  421205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 17:39:52.640913  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:55.140750  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:53.714013  421205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 17:39:53.727031  421205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
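
The 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist above is not printed in the log. A minimal bridge-plus-portmap conflist of the kind minikube's bridge CNI step generates is sketched here; the exact field values are assumptions, not the file's real contents:

	# Sketch of a bridge CNI conflist (assumed contents; the real file is
	# generated by minikube and only its size appears in the log).
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF

The bridge plugin attaches each pod to a shared Linux bridge with host-local IPAM; portmap implements hostPort mappings on top of it.
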
	I0216 17:39:53.811313  421205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 17:39:53.811367  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:53.811413  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9 minikube.k8s.io/name=default-k8s-diff-port-816748 minikube.k8s.io/updated_at=2024_02_16T17_39_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:54.021083  421205 ops.go:34] apiserver oom_adj: -16
	I0216 17:39:54.021156  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:54.521783  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:55.022023  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:55.521421  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:56.021555  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:56.521524  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:57.021852  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:57.521744  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:57.640415  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:00.139644  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:39:56.373478  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:39:56.383879  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:39:56.401408  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.401433  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:39:56.401477  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:39:56.418690  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.418712  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:39:56.418759  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:39:56.436337  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.436362  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:39:56.436415  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:39:56.455521  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.455553  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:39:56.455602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:39:56.473949  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.473981  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:39:56.474028  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:39:56.491473  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.491495  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:39:56.491541  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:39:56.509845  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.509869  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:39:56.509955  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:39:56.528197  455078 logs.go:276] 0 containers: []
	W0216 17:39:56.528222  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:39:56.528231  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:39:56.528242  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:39:56.549520  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:35 old-k8s-version-478853 kubelet[1655]: E0216 17:39:35.089853    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.551570  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:36 old-k8s-version-478853 kubelet[1655]: E0216 17:39:36.089923    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.558562  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.091723    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:39:56.559087  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.571119  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.571305  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.579133  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.586015  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:39:56.590770  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:39:56.590803  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:39:56.615066  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:39:56.615101  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:39:56.678064  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:39:56.678096  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:39:56.678114  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:39:56.695201  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:39:56.695238  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:39:56.736311  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:56.736338  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:39:56.736412  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:39:56.736433  455078 out.go:239]   Feb 16 17:39:40 old-k8s-version-478853 kubelet[1655]: E0216 17:39:40.092909    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.736451  455078 out.go:239]   Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:39:56.736465  455078 out.go:239]   Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:39:56.736474  455078 out.go:239]   Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:39:56.736483  455078 out.go:239]   Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:39:56.736496  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:39:56.736508  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:39:58.021227  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:58.521363  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:59.021155  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:39:59.521559  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:00.021409  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:00.521925  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:01.022133  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:01.522131  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:02.021930  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:02.521763  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:02.140368  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:04.639630  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:03.022096  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:03.521373  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:04.021412  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:04.521179  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:05.021348  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:05.521512  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:06.021569  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:06.521578  421205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 17:40:06.612467  421205 kubeadm.go:1088] duration metric: took 12.801150825s to wait for elevateKubeSystemPrivileges.
	I0216 17:40:06.612503  421205 kubeadm.go:406] StartCluster complete in 4m56.263224158s
	I0216 17:40:06.612526  421205 settings.go:142] acquiring lock: {Name:mkc0445e63ab2bfc5d2d7306f3af19ca96df275c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:40:06.612605  421205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 17:40:06.614600  421205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/kubeconfig: {Name:mkdc2ed683d72ff0e162ea619463de7edb9c0858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:40:06.616255  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 17:40:06.616305  421205 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 17:40:06.616387  421205 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616409  421205 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-816748"
	W0216 17:40:06.616417  421205 addons.go:243] addon storage-provisioner should already be in state true
	I0216 17:40:06.616458  421205 config.go:182] Loaded profile config "default-k8s-diff-port-816748": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:40:06.616470  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.616511  421205 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616527  421205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-816748"
	I0216 17:40:06.616614  421205 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616633  421205 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-816748"
	W0216 17:40:06.616642  421205 addons.go:243] addon metrics-server should already be in state true
	I0216 17:40:06.616678  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.616835  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.616951  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.616959  421205 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-816748"
	I0216 17:40:06.616973  421205 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-816748"
	W0216 17:40:06.616980  421205 addons.go:243] addon dashboard should already be in state true
	I0216 17:40:06.617018  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.617107  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.617436  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.648433  421205 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0216 17:40:06.650072  421205 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0216 17:40:06.652286  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0216 17:40:06.652308  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0216 17:40:06.653725  421205 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0216 17:40:06.652367  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	I0216 17:40:06.654228  421205 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-816748"
	I0216 17:40:06.655391  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 17:40:06.656747  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 17:40:06.656777  421205 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:40:06.656793  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	W0216 17:40:06.656808  421205 addons.go:243] addon default-storageclass should already be in state true
	I0216 17:40:06.658385  421205 host.go:66] Checking if "default-k8s-diff-port-816748" exists ...
	I0216 17:40:06.658341  421205 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:40:06.658473  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 17:40:06.658518  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	I0216 17:40:06.658765  421205 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-816748 --format={{.State.Status}}
	I0216 17:40:06.674555  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.677611  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.679326  421205 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 17:40:06.679343  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 17:40:06.679382  421205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-816748
	I0216 17:40:06.681559  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.703643  421205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/default-k8s-diff-port-816748/id_rsa Username:docker}
	I0216 17:40:06.913413  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 17:40:06.915276  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 17:40:06.915298  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0216 17:40:06.922563  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0216 17:40:06.926729  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0216 17:40:06.926756  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0216 17:40:06.995331  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:40:07.005872  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 17:40:07.005905  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 17:40:07.103003  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0216 17:40:07.103037  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0216 17:40:07.110492  421205 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 17:40:07.110518  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 17:40:07.120377  421205 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-816748" context rescaled to 1 replicas
	I0216 17:40:07.120485  421205 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:40:07.122904  421205 out.go:177] * Verifying Kubernetes components...
	I0216 17:40:07.124464  421205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:40:07.213518  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0216 17:40:07.213549  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0216 17:40:07.295281  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 17:40:07.409983  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0216 17:40:07.410082  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0216 17:40:07.599285  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0216 17:40:07.599372  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0216 17:40:07.706049  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0216 17:40:07.706088  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0216 17:40:07.794066  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0216 17:40:07.794105  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0216 17:40:07.822000  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0216 17:40:07.822081  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0216 17:40:07.911598  421205 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 17:40:07.911625  421205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0216 17:40:07.992925  421205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 17:40:08.711726  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.798252087s)
	I0216 17:40:08.994620  421205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.072013054s)
	I0216 17:40:08.994687  421205 start.go:929] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
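
The pipeline completed above rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the Docker network gateway (192.168.67.1) ahead of the forward plugin, and a log directive ahead of errors. One way to confirm the record landed is sketched below; this check is an assumed follow-up, not part of this run:

	# Assumed verification step: print the hosts block from the live Corefile.
	kubectl -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# Expected fragment:
	#     hosts {
	#        192.168.67.1 host.minikube.internal
	#        fallthrough
	#     }
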
	I0216 17:40:09.404133  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.408758523s)
	I0216 17:40:09.404258  421205 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.279733497s)
	I0216 17:40:09.404326  421205 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-816748" to be "Ready" ...
	I0216 17:40:09.410294  421205 node_ready.go:49] node "default-k8s-diff-port-816748" has status "Ready":"True"
	I0216 17:40:09.410317  421205 node_ready.go:38] duration metric: took 5.951342ms waiting for node "default-k8s-diff-port-816748" to be "Ready" ...
	I0216 17:40:09.410329  421205 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 17:40:09.416584  421205 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6dd5s" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:09.531400  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.236063439s)
	I0216 17:40:09.531444  421205 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-816748"
	I0216 17:40:10.207461  421205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.21448694s)
	I0216 17:40:10.208862  421205 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-816748 addons enable metrics-server
	
	I0216 17:40:10.210493  421205 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0216 17:40:06.646176  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:09.140721  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:06.738101  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:06.750726  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:06.772968  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.772995  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:06.773046  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:06.791480  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.791505  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:06.791551  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:06.815979  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.816012  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:06.816068  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:06.842123  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.842147  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:06.842203  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:06.860609  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.860654  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:06.860709  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:06.879119  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.879147  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:06.879191  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:06.898150  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.898182  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:06.898242  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:06.924427  455078 logs.go:276] 0 containers: []
	W0216 17:40:06.924445  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:06.924454  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:06.924465  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:06.953125  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.091007    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:06.953295  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:47 old-k8s-version-478853 kubelet[1655]: E0216 17:39:47.093108    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:06.960436  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:51 old-k8s-version-478853 kubelet[1655]: E0216 17:39:51.089869    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:06.965576  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:06.972709  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:06.974757  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:06.985103  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:06.985250  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:06.988009  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:06.988029  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:07.022943  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:07.023046  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:07.085083  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:07.085110  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:07.085127  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:07.106416  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:07.106465  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:07.152094  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:07.152117  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:07.152199  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:07.152209  455078 out.go:239]   Feb 16 17:39:54 old-k8s-version-478853 kubelet[1655]: E0216 17:39:54.091371    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:07.152220  455078 out.go:239]   Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:07.152227  455078 out.go:239]   Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:07.152233  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:07.152240  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:07.152247  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:07.152255  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:10.212229  421205 addons.go:505] enable addons completed in 3.595922671s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0216 17:40:10.423944  421205 pod_ready.go:92] pod "coredns-5dd5756b68-6dd5s" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.423988  421205 pod_ready.go:81] duration metric: took 1.007376782s waiting for pod "coredns-5dd5756b68-6dd5s" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.424003  421205 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.429495  421205 pod_ready.go:92] pod "etcd-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.429524  421205 pod_ready.go:81] duration metric: took 5.513071ms waiting for pod "etcd-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.429537  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.497606  421205 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.497644  421205 pod_ready.go:81] duration metric: took 68.098616ms waiting for pod "kube-apiserver-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.497660  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.503258  421205 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.503280  421205 pod_ready.go:81] duration metric: took 5.611297ms waiting for pod "kube-controller-manager-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.503290  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7czt" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.607945  421205 pod_ready.go:92] pod "kube-proxy-f7czt" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:10.607971  421205 pod_ready.go:81] duration metric: took 104.674051ms waiting for pod "kube-proxy-f7czt" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:10.607986  421205 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:11.008078  421205 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-816748" in "kube-system" namespace has status "Ready":"True"
	I0216 17:40:11.008126  421205 pod_ready.go:81] duration metric: took 400.130876ms waiting for pod "kube-scheduler-default-k8s-diff-port-816748" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:11.008144  421205 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace to be "Ready" ...
	I0216 17:40:11.141383  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:13.640883  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:13.014986  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:15.514133  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:17.515916  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:16.140859  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:18.141101  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:20.640092  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:17.154126  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:17.166732  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:17.188369  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.188397  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:17.188456  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:17.208931  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.208958  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:17.209015  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:17.231036  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.231064  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:17.231117  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:17.251517  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.251544  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:17.251609  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:17.273246  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.273278  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:17.273329  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:17.294078  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.294106  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:17.294162  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:17.315685  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.315708  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:17.315752  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:17.339445  455078 logs.go:276] 0 containers: []
	W0216 17:40:17.339468  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:17.339477  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:17.339488  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:17.373320  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:17.373357  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:17.450406  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:17.450427  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:17.450442  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:17.470514  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:17.470553  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:17.518001  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:17.518029  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:17.548549  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:58 old-k8s-version-478853 kubelet[1655]: E0216 17:39:58.091584    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.551801  455078 logs.go:138] Found kubelet problem: Feb 16 17:39:59 old-k8s-version-478853 kubelet[1655]: E0216 17:39:59.090282    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.566478  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:17.566729  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:17.584759  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.587832  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.593226  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:17.595733  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:17.595755  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:17.595804  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:17.595815  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:17.595822  455078 out.go:239]   Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:17.595829  455078 out.go:239]   Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:17.595838  455078 out.go:239]   Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:17.595847  455078 out.go:239]   Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:17.595855  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:17.595860  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:20.014588  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:22.014673  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:22.640353  388513 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:23.139864  388513 pod_ready.go:81] duration metric: took 4m0.005711416s waiting for pod "metrics-server-57f55c9bc5-mwshp" in "kube-system" namespace to be "Ready" ...
	E0216 17:40:23.139887  388513 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0216 17:40:23.139894  388513 pod_ready.go:38] duration metric: took 4m1.197458921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 17:40:23.139912  388513 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:40:23.139973  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:23.157852  388513 logs.go:276] 1 containers: [ee128c09c2d6]
	I0216 17:40:23.157924  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:23.178684  388513 logs.go:276] 1 containers: [6ddccc19fa99]
	I0216 17:40:23.178767  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:23.196651  388513 logs.go:276] 1 containers: [403deca60e52]
	I0216 17:40:23.196736  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:23.214872  388513 logs.go:276] 1 containers: [c5d843a77086]
	I0216 17:40:23.214936  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:23.232995  388513 logs.go:276] 1 containers: [cda0e6c36571]
	I0216 17:40:23.233093  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:23.251975  388513 logs.go:276] 1 containers: [f11e3bd1e9f2]
	I0216 17:40:23.252067  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:23.269953  388513 logs.go:276] 0 containers: []
	W0216 17:40:23.269984  388513 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:23.270043  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:40:23.287999  388513 logs.go:276] 1 containers: [e4861933e8ab]
	I0216 17:40:23.288072  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:23.307186  388513 logs.go:276] 1 containers: [9d42bc551893]
	I0216 17:40:23.307243  388513 logs.go:123] Gathering logs for coredns [403deca60e52] ...
	I0216 17:40:23.307259  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403deca60e52"
	I0216 17:40:23.327277  388513 logs.go:123] Gathering logs for kube-scheduler [c5d843a77086] ...
	I0216 17:40:23.327304  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d843a77086"
	I0216 17:40:23.353566  388513 logs.go:123] Gathering logs for container status ...
	I0216 17:40:23.353607  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:23.410553  388513 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:23.410616  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:40:23.497408  388513 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:23.497446  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:40:23.592826  388513 logs.go:123] Gathering logs for kube-apiserver [ee128c09c2d6] ...
	I0216 17:40:23.592857  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee128c09c2d6"
	I0216 17:40:23.626632  388513 logs.go:123] Gathering logs for etcd [6ddccc19fa99] ...
	I0216 17:40:23.626668  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddccc19fa99"
	I0216 17:40:23.652222  388513 logs.go:123] Gathering logs for storage-provisioner [e4861933e8ab] ...
	I0216 17:40:23.652256  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4861933e8ab"
	I0216 17:40:23.672102  388513 logs.go:123] Gathering logs for kubernetes-dashboard [9d42bc551893] ...
	I0216 17:40:23.672131  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d42bc551893"
	I0216 17:40:23.693163  388513 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:23.693190  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:23.746041  388513 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:23.746081  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:23.772653  388513 logs.go:123] Gathering logs for kube-proxy [cda0e6c36571] ...
	I0216 17:40:23.772690  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cda0e6c36571"
	I0216 17:40:23.795423  388513 logs.go:123] Gathering logs for kube-controller-manager [f11e3bd1e9f2] ...
	I0216 17:40:23.795457  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11e3bd1e9f2"
	I0216 17:40:24.513521  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:26.515124  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:26.339041  388513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:26.351529  388513 api_server.go:72] duration metric: took 4m7.137437385s to wait for apiserver process to appear ...
	I0216 17:40:26.351556  388513 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:40:26.351633  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:26.369719  388513 logs.go:276] 1 containers: [ee128c09c2d6]
	I0216 17:40:26.369790  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:26.389630  388513 logs.go:276] 1 containers: [6ddccc19fa99]
	I0216 17:40:26.389709  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:26.408167  388513 logs.go:276] 1 containers: [403deca60e52]
	I0216 17:40:26.408256  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:26.425906  388513 logs.go:276] 1 containers: [c5d843a77086]
	I0216 17:40:26.425984  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:26.444534  388513 logs.go:276] 1 containers: [cda0e6c36571]
	I0216 17:40:26.444648  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:26.465664  388513 logs.go:276] 1 containers: [f11e3bd1e9f2]
	I0216 17:40:26.465740  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:26.483212  388513 logs.go:276] 0 containers: []
	W0216 17:40:26.483244  388513 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:26.483305  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:40:26.502035  388513 logs.go:276] 1 containers: [e4861933e8ab]
	I0216 17:40:26.502118  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:26.522106  388513 logs.go:276] 1 containers: [9d42bc551893]
	I0216 17:40:26.522147  388513 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:26.522158  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:40:26.610242  388513 logs.go:123] Gathering logs for kubernetes-dashboard [9d42bc551893] ...
	I0216 17:40:26.610280  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d42bc551893"
	I0216 17:40:26.634292  388513 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:26.634335  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:26.687413  388513 logs.go:123] Gathering logs for coredns [403deca60e52] ...
	I0216 17:40:26.687450  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403deca60e52"
	I0216 17:40:26.709327  388513 logs.go:123] Gathering logs for storage-provisioner [e4861933e8ab] ...
	I0216 17:40:26.709357  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4861933e8ab"
	I0216 17:40:26.734394  388513 logs.go:123] Gathering logs for container status ...
	I0216 17:40:26.734431  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:26.802087  388513 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:26.802122  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:40:26.901820  388513 logs.go:123] Gathering logs for kube-scheduler [c5d843a77086] ...
	I0216 17:40:26.901854  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d843a77086"
	I0216 17:40:26.928476  388513 logs.go:123] Gathering logs for kube-proxy [cda0e6c36571] ...
	I0216 17:40:26.928505  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cda0e6c36571"
	I0216 17:40:26.949968  388513 logs.go:123] Gathering logs for kube-controller-manager [f11e3bd1e9f2] ...
	I0216 17:40:26.949998  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11e3bd1e9f2"
	I0216 17:40:26.990305  388513 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:26.990335  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:27.015341  388513 logs.go:123] Gathering logs for kube-apiserver [ee128c09c2d6] ...
	I0216 17:40:27.015376  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee128c09c2d6"
	I0216 17:40:27.045881  388513 logs.go:123] Gathering logs for etcd [6ddccc19fa99] ...
	I0216 17:40:27.045914  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddccc19fa99"
	I0216 17:40:29.572745  388513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0216 17:40:29.577898  388513 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0216 17:40:29.578920  388513 api_server.go:141] control plane version: v1.28.4
	I0216 17:40:29.578940  388513 api_server.go:131] duration metric: took 3.227378488s to wait for apiserver health ...
	I0216 17:40:29.578948  388513 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:40:29.579008  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:29.598562  388513 logs.go:276] 1 containers: [ee128c09c2d6]
	I0216 17:40:29.598650  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:29.617159  388513 logs.go:276] 1 containers: [6ddccc19fa99]
	I0216 17:40:29.617231  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:29.635295  388513 logs.go:276] 1 containers: [403deca60e52]
	I0216 17:40:29.635357  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:29.653771  388513 logs.go:276] 1 containers: [c5d843a77086]
	I0216 17:40:29.653859  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:29.671979  388513 logs.go:276] 1 containers: [cda0e6c36571]
	I0216 17:40:29.672047  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:29.690510  388513 logs.go:276] 1 containers: [f11e3bd1e9f2]
	I0216 17:40:29.690594  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:29.709610  388513 logs.go:276] 0 containers: []
	W0216 17:40:29.709634  388513 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:29.709689  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:29.731047  388513 logs.go:276] 1 containers: [9d42bc551893]
	I0216 17:40:29.731144  388513 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:40:29.754845  388513 logs.go:276] 1 containers: [e4861933e8ab]
	I0216 17:40:29.754902  388513 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:29.754917  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:40:29.843952  388513 logs.go:123] Gathering logs for coredns [403deca60e52] ...
	I0216 17:40:29.843989  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 403deca60e52"
	I0216 17:40:29.864802  388513 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:29.864828  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:29.920686  388513 logs.go:123] Gathering logs for kube-apiserver [ee128c09c2d6] ...
	I0216 17:40:29.920724  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee128c09c2d6"
	I0216 17:40:29.951656  388513 logs.go:123] Gathering logs for kube-scheduler [c5d843a77086] ...
	I0216 17:40:29.951695  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d843a77086"
	I0216 17:40:29.978677  388513 logs.go:123] Gathering logs for kube-proxy [cda0e6c36571] ...
	I0216 17:40:29.978715  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cda0e6c36571"
	I0216 17:40:30.001402  388513 logs.go:123] Gathering logs for container status ...
	I0216 17:40:30.001434  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:30.061025  388513 logs.go:123] Gathering logs for etcd [6ddccc19fa99] ...
	I0216 17:40:30.061057  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddccc19fa99"
	I0216 17:40:30.088081  388513 logs.go:123] Gathering logs for kube-controller-manager [f11e3bd1e9f2] ...
	I0216 17:40:30.088120  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11e3bd1e9f2"
	I0216 17:40:30.130971  388513 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:30.131005  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:30.154482  388513 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:30.154518  388513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:40:30.249872  388513 logs.go:123] Gathering logs for kubernetes-dashboard [9d42bc551893] ...
	I0216 17:40:30.249907  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d42bc551893"
	I0216 17:40:30.271318  388513 logs.go:123] Gathering logs for storage-provisioner [e4861933e8ab] ...
	I0216 17:40:30.271347  388513 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4861933e8ab"
	I0216 17:40:27.597408  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:27.608054  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:27.625950  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.625980  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:27.626038  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:27.643801  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.643825  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:27.643880  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:27.661848  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.661878  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:27.661942  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:27.680910  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.680935  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:27.680984  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:27.698550  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.698575  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:27.698619  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:27.716355  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.716386  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:27.716449  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:27.739573  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.739621  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:27.739686  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:27.760360  455078 logs.go:276] 0 containers: []
	W0216 17:40:27.760383  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:27.760395  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:27.760426  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:27.779114  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:27.779170  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:27.818659  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:27.818687  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:27.841156  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.094475    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.841308  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:05 old-k8s-version-478853 kubelet[1655]: E0216 17:40:05.095602    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.853903  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:27.855874  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:27.859522  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.864706  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.874176  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:40:27.879404  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:27.879429  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:27.903542  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:27.903580  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:27.964966  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:27.964993  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:27.965008  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:27.965060  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:27.965077  455078 out.go:239]   Feb 16 17:40:13 old-k8s-version-478853 kubelet[1655]: E0216 17:40:13.090153    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:27.965133  455078 out.go:239]   Feb 16 17:40:14 old-k8s-version-478853 kubelet[1655]: E0216 17:40:14.095987    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:27.965146  455078 out.go:239]   Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:27.965155  455078 out.go:239]   Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:27.965165  455078 out.go:239]   Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:40:27.965175  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:27.965182  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:40:32.798008  388513 system_pods.go:59] 8 kube-system pods found
	I0216 17:40:32.798035  388513 system_pods.go:61] "coredns-5dd5756b68-qxbsw" [86635938-da74-4ed1-84bc-86c0fe6f2702] Running
	I0216 17:40:32.798040  388513 system_pods.go:61] "etcd-embed-certs-162802" [4ceabd92-09a4-457e-a4df-978436c3a95b] Running
	I0216 17:40:32.798046  388513 system_pods.go:61] "kube-apiserver-embed-certs-162802" [3eed31be-48b2-40c6-95b2-b468485f7b32] Running
	I0216 17:40:32.798051  388513 system_pods.go:61] "kube-controller-manager-embed-certs-162802" [35a7a353-daa8-45d5-9a40-a7b9715036e5] Running
	I0216 17:40:32.798055  388513 system_pods.go:61] "kube-proxy-7w7fm" [a11a21da-10f2-49b5-8b5c-c7b201db94f6] Running
	I0216 17:40:32.798059  388513 system_pods.go:61] "kube-scheduler-embed-certs-162802" [6aab76ff-2e1f-41d5-b007-0daeb8d2da79] Running
	I0216 17:40:32.798065  388513 system_pods.go:61] "metrics-server-57f55c9bc5-mwshp" [fb2ed14c-f295-431c-8223-cd10088ca15a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:40:32.798069  388513 system_pods.go:61] "storage-provisioner" [8b68bcc2-40d3-4b43-855f-5787f2bb54e7] Running
	I0216 17:40:32.798076  388513 system_pods.go:74] duration metric: took 3.219122938s to wait for pod list to return data ...
	I0216 17:40:32.798083  388513 default_sa.go:34] waiting for default service account to be created ...
	I0216 17:40:32.800438  388513 default_sa.go:45] found service account: "default"
	I0216 17:40:32.800461  388513 default_sa.go:55] duration metric: took 2.372693ms for default service account to be created ...
	I0216 17:40:32.800470  388513 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 17:40:32.805286  388513 system_pods.go:86] 8 kube-system pods found
	I0216 17:40:32.805310  388513 system_pods.go:89] "coredns-5dd5756b68-qxbsw" [86635938-da74-4ed1-84bc-86c0fe6f2702] Running
	I0216 17:40:32.805316  388513 system_pods.go:89] "etcd-embed-certs-162802" [4ceabd92-09a4-457e-a4df-978436c3a95b] Running
	I0216 17:40:32.805320  388513 system_pods.go:89] "kube-apiserver-embed-certs-162802" [3eed31be-48b2-40c6-95b2-b468485f7b32] Running
	I0216 17:40:32.805328  388513 system_pods.go:89] "kube-controller-manager-embed-certs-162802" [35a7a353-daa8-45d5-9a40-a7b9715036e5] Running
	I0216 17:40:32.805336  388513 system_pods.go:89] "kube-proxy-7w7fm" [a11a21da-10f2-49b5-8b5c-c7b201db94f6] Running
	I0216 17:40:32.805342  388513 system_pods.go:89] "kube-scheduler-embed-certs-162802" [6aab76ff-2e1f-41d5-b007-0daeb8d2da79] Running
	I0216 17:40:32.805352  388513 system_pods.go:89] "metrics-server-57f55c9bc5-mwshp" [fb2ed14c-f295-431c-8223-cd10088ca15a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:40:32.805389  388513 system_pods.go:89] "storage-provisioner" [8b68bcc2-40d3-4b43-855f-5787f2bb54e7] Running
	I0216 17:40:32.805397  388513 system_pods.go:126] duration metric: took 4.922741ms to wait for k8s-apps to be running ...
	I0216 17:40:32.805407  388513 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 17:40:32.805452  388513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:40:32.817981  388513 system_svc.go:56] duration metric: took 12.566654ms WaitForService to wait for kubelet.
	I0216 17:40:32.818013  388513 kubeadm.go:581] duration metric: took 4m13.603925134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 17:40:32.818038  388513 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:40:32.820989  388513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0216 17:40:32.821015  388513 node_conditions.go:123] node cpu capacity is 8
	I0216 17:40:32.821027  388513 node_conditions.go:105] duration metric: took 2.983734ms to run NodePressure ...
	I0216 17:40:32.821039  388513 start.go:228] waiting for startup goroutines ...
	I0216 17:40:32.821047  388513 start.go:233] waiting for cluster config update ...
	I0216 17:40:32.821063  388513 start.go:242] writing updated cluster config ...
	I0216 17:40:32.821410  388513 ssh_runner.go:195] Run: rm -f paused
	I0216 17:40:32.870101  388513 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0216 17:40:32.872413  388513 out.go:177] * Done! kubectl is now configured to use "embed-certs-162802" cluster and "default" namespace by default
	I0216 17:40:29.013422  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:31.013924  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:33.014487  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:35.515361  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:37.966560  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:37.977313  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:37.994775  455078 logs.go:276] 0 containers: []
	W0216 17:40:37.994798  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:37.994844  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:38.012932  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.012960  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:38.013014  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:38.033792  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.033820  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:38.033880  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:38.052523  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.052549  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:38.052610  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:38.072650  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.072705  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:38.072765  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:38.092189  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.092223  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:38.092296  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:38.110333  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.110359  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:38.110404  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:38.128992  455078 logs.go:276] 0 containers: []
	W0216 17:40:38.129027  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:38.129037  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:38.129048  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:38.167101  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:38.167135  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:38.186657  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:16 old-k8s-version-478853 kubelet[1655]: E0216 17:40:16.089820    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.191871  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:38.201457  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:38.207565  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:38.209614  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.217808  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:40:38.224819  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:38.224859  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:38.248754  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:38.248833  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:38.311199  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:38.311223  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:38.311236  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:38.327036  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:38.327063  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:38.327121  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:38.327132  455078 out.go:239]   Feb 16 17:40:19 old-k8s-version-478853 kubelet[1655]: E0216 17:40:19.089977    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:38.327140  455078 out.go:239]   Feb 16 17:40:25 old-k8s-version-478853 kubelet[1655]: E0216 17:40:25.090405    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:38.327148  455078 out.go:239]   Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:38.327154  455078 out.go:239]   Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:38.327160  455078 out.go:239]   Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:40:38.327169  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:38.327174  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
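
Each retry cycle above follows the same shape: minikube shells into the node and lists containers, running or exited, for every control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, and every probe comes back with 0 containers. A minimal standalone sketch of that probe, assuming only a host with docker on PATH (an illustration, not minikube's logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		// Same filter the log shows: any container, running or exited,
    		// whose name carries the k8s_<component> prefix.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("probe for %q failed: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
    	}
    }
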
	I0216 17:40:38.014130  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:40.014835  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:42.015514  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:44.514285  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:47.015104  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:48.327861  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:48.339194  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:48.360648  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.360673  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:48.360728  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:48.378486  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.378513  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:48.378557  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:48.398639  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.398666  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:48.398712  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:48.417793  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.417817  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:48.417873  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:48.435529  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.435552  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:48.435602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:48.457049  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.457082  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:48.457155  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:48.477801  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.477826  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:48.477868  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:48.496234  455078 logs.go:276] 0 containers: []
	W0216 17:40:48.496257  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:48.496265  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:48.496278  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:48.538184  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:48.538212  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:48.564633  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:28 old-k8s-version-478853 kubelet[1655]: E0216 17:40:28.089742    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.566786  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:48.576446  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:48.585675  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:48.585865  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.588023  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:48.601821  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:48.601858  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:48.626705  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:48.626746  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:48.803956  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:48.803984  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:48.803997  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:48.820684  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:48.820710  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:48.820755  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:48.820765  455078 out.go:239]   Feb 16 17:40:29 old-k8s-version-478853 kubelet[1655]: E0216 17:40:29.089619    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:48.820790  455078 out.go:239]   Feb 16 17:40:34 old-k8s-version-478853 kubelet[1655]: E0216 17:40:34.090495    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:48.820799  455078 out.go:239]   Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:48.820807  455078 out.go:239]   Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:48.820814  455078 out.go:239]   Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:40:48.820820  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:48.820826  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
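
The kubelet problems repeated above all fail the same way: the kubelet's Docker integration inspects the pinned image (k8s.gcr.io/etcd:3.3.15-0, k8s.gcr.io/kube-apiserver:v1.16.0, and so on), the inspect result carries neither an Id nor a size, and StartContainer is therefore never attempted, which is why no k8s_* container ever appears. A hedged sketch of the shape of that check, reconstructed from the error wording alone (not the actual Kubernetes v1.16 source):

    package main

    import "fmt"

    type imageInspect struct {
    	ID   string
    	Size int64
    }

    // validateInspect rejects an inspect result that has neither an Id nor
    // a size, matching the wording "Id or size of image ... is not set".
    func validateInspect(img imageInspect, ref string) error {
    	if img.ID == "" && img.Size == 0 {
    		return fmt.Errorf("Id or size of image %q is not set", ref)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(validateInspect(imageInspect{}, "k8s.gcr.io/etcd:3.3.15-0"))
    }
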
	I0216 17:40:49.514565  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:52.014075  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:54.515655  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:57.014540  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:40:58.821518  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:40:58.832683  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:40:58.850170  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.850200  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:40:58.850256  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:40:58.868305  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.868327  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:40:58.868367  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:40:58.887531  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.887556  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:40:58.887602  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:40:58.905145  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.905176  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:40:58.905229  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:40:58.923499  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.923530  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:40:58.923587  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:40:58.941547  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.941581  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:40:58.941629  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:40:58.959233  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.959258  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:40:58.959309  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:40:58.977281  455078 logs.go:276] 0 containers: []
	W0216 17:40:58.977302  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:40:58.977313  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:40:58.977323  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:40:59.015956  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:40:59.015983  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:40:59.040126  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.090401    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.040302  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:39 old-k8s-version-478853 kubelet[1655]: E0216 17:40:39.091492    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:40:59.042282  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.056437  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:59.062909  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.065045  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.065415  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:40:59.073540  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:40:59.073574  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:40:59.097435  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:40:59.097482  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:40:59.159801  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:40:59.159827  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:40:59.159839  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:40:59.176592  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:59.176621  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:40:59.176676  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:40:59.176684  455078 out.go:239]   Feb 16 17:40:40 old-k8s-version-478853 kubelet[1655]: E0216 17:40:40.089804    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.176693  455078 out.go:239]   Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:40:59.176709  455078 out.go:239]   Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:40:59.176718  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:40:59.176728  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:40:59.176735  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:40:59.176740  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
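
With no kube-apiserver container running, every `kubectl describe nodes` attempt above is refused at localhost:8443, so the "describe nodes" section of the log gather stays empty. The refusal can be reproduced with a plain TCP dial; a sketch assuming the apiserver's default secure port on the node:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no kube-apiserver listening, this prints a
    		// "connection refused" error just as kubectl does above.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
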
	I0216 17:40:59.514085  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:01.514563  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:04.014812  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:06.514468  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:09.178430  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:09.189176  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:09.207320  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.207345  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:09.207400  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:09.225002  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.225033  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:09.225096  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:09.243928  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.243959  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:09.244013  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:09.262481  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.262505  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:09.262559  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:09.279969  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.279992  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:09.280049  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:09.297754  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.297777  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:09.297825  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:09.315771  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.315800  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:09.315852  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:09.333460  455078 logs.go:276] 0 containers: []
	W0216 17:41:09.333491  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:09.333500  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:09.333511  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:09.355521  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:49 old-k8s-version-478853 kubelet[1655]: E0216 17:40:49.090439    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.362102  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:53 old-k8s-version-478853 kubelet[1655]: E0216 17:40:53.089863    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.364251  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.090754    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.364640  455078 logs.go:138] Found kubelet problem: Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:09.381046  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.388010  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.388320  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.390233  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:41:09.392031  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:09.392060  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:09.417243  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:09.417287  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:09.478675  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:09.478700  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:09.478711  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:09.495170  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:09.495201  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:09.534342  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:09.534369  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:09.534418  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:09.534429  455078 out.go:239]   Feb 16 17:40:54 old-k8s-version-478853 kubelet[1655]: E0216 17:40:54.091869    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:09.534440  455078 out.go:239]   Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:09.534451  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:09.534457  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:09.534469  455078 out.go:239]   Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:41:09.534474  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:09.534482  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:09.014641  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:11.513750  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:13.513808  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:16.014153  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
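
The interleaved 421205 lines come from a second test profile that polls every couple of seconds until the metrics-server pod reports Ready. A sketch of such a readiness poll with client-go, under stated assumptions (the kubeconfig path is hypothetical; the test uses its own profile and helpers):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "metrics-server-57f55c9bc5-tdw8t", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // matches the ~2s spacing in the log
    	}
    }
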
	I0216 17:41:19.535038  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:19.545504  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:19.563494  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.563519  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:19.563579  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:19.581616  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.581645  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:19.581692  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:19.599875  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.599906  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:19.599956  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:19.618224  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.618251  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:19.618310  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:19.637362  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.637392  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:19.637442  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:19.655724  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.655755  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:19.655800  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:19.672560  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.672588  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:19.672636  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:19.690212  455078 logs.go:276] 0 containers: []
	W0216 17:41:19.690239  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:19.690251  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:19.690265  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:19.719464  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:03 old-k8s-version-478853 kubelet[1655]: E0216 17:41:03.090912    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.726630  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:19.726900  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:19.728877  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:19.741983  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.745889  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:19.748644  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:19.748681  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:19.774437  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:19.774473  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:19.836688  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:19.836707  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:19.836719  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:19.852476  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:19.852506  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:19.889446  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:19.889484  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:19.889541  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:19.889559  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:19.889574  455078 out.go:239]   Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:19.889591  455078 out.go:239]   Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:19.889607  455078 out.go:239]   Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:19.889625  455078 out.go:239]   Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:19.889639  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:19.889653  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
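
Kubelet and Docker logs are collected with `journalctl -u kubelet -n 400` and `journalctl -u docker -u cri-docker -n 400`, then scanned for pod sync errors, which is what produces the `Found kubelet problem` lines. A sketch of that scan, assuming a `pod_workers.go ... Error syncing pod` line marks a problem (not minikube's actual matcher):

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    func main() {
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	problem := regexp.MustCompile(`pod_workers\.go.*Error syncing pod`)
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		if problem.MatchString(sc.Text()) {
    			fmt.Println("Found kubelet problem:", sc.Text())
    		}
    	}
    }
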
	I0216 17:41:18.514473  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:21.014774  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:23.513640  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:25.514552  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:27.514673  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:29.891027  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:29.901935  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:29.919667  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.919697  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:29.919757  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:29.937792  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.937823  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:29.937873  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:29.955488  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.955513  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:29.955557  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:29.973119  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.973147  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:29.973194  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:29.991607  455078 logs.go:276] 0 containers: []
	W0216 17:41:29.991634  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:29.991681  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:30.010229  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.010258  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:30.010330  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:30.029419  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.029446  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:30.029496  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:30.047844  455078 logs.go:276] 0 containers: []
	W0216 17:41:30.047870  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:30.047882  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:30.047900  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:30.108010  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:30.108031  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:30.108042  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:30.124087  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:30.124121  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:30.161506  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:30.161532  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:30.182528  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.090189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.182822  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:07 old-k8s-version-478853 kubelet[1655]: E0216 17:41:07.091301    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.184822  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:08 old-k8s-version-478853 kubelet[1655]: E0216 17:41:08.089109    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.197489  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:16 old-k8s-version-478853 kubelet[1655]: E0216 17:41:16.092141    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.201168  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.204811  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.208217  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.216614  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.220063  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:30.221825  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:30.221850  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:30.245800  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:30.245840  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:30.245897  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:30.245910  455078 out.go:239]   Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:30.245938  455078 out.go:239]   Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:30.245947  455078 out.go:239]   Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:30.245955  455078 out.go:239]   Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:30.245969  455078 out.go:239]   Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:41:30.245977  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:30.245986  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
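
[Editor's note] The repeated "Found kubelet problem" entries above are produced while minikube scans the kubelet journal (the "journalctl -u kubelet -n 400" run a few lines earlier) for known error patterns. The sketch below is a minimal, hypothetical Go illustration of that kind of scan; the regular expression and all names are assumptions for illustration, not minikube's actual logs.go code. It runs on any systemd host with journalctl available.

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "regexp"
    )

    // problemRe matches kubelet journal lines that indicate a pod sync
    // failure, e.g. `pod_workers.go:191] Error syncing pod ...`.
    // The pattern is an illustrative assumption.
    var problemRe = regexp.MustCompile(`Error syncing pod|ImageInspectError`)

    func main() {
        // Read the last 400 kubelet journal lines, mirroring the command
        // visible in the log above.
        out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            if problemRe.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }
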
	I0216 17:41:30.013845  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:32.014461  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:34.513494  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:36.513808  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:40.247341  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:40.258231  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:40.277091  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.277115  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:40.277170  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:40.295536  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.295559  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:40.295604  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:40.312997  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.313026  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:40.313071  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:40.330525  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.330546  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:40.330589  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:40.348713  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.348742  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:40.348800  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:40.366775  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.366797  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:40.366841  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:40.385643  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.385663  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:40.385707  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:40.403427  455078 logs.go:276] 0 containers: []
	W0216 17:41:40.403450  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:40.403459  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:40.403470  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:40.439890  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:40.439928  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:40.462737  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:18 old-k8s-version-478853 kubelet[1655]: E0216 17:41:18.091189    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.466398  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:20 old-k8s-version-478853 kubelet[1655]: E0216 17:41:20.090110    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.470658  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:22 old-k8s-version-478853 kubelet[1655]: E0216 17:41:22.090033    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.479019  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:40.482450  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.487453  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.489577  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.500740  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:41:40.501258  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:40.501276  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:40.525173  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:40.525207  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:40.587517  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:40.587539  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:40.587555  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:40.603528  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:40.603556  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:40.603611  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:40.603623  455078 out.go:239]   Feb 16 17:41:27 old-k8s-version-478853 kubelet[1655]: E0216 17:41:27.090849    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:40.603636  455078 out.go:239]   Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:40.603652  455078 out.go:239]   Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:40.603661  455078 out.go:239]   Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:40.603670  455078 out.go:239]   Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:41:40.603681  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:40.603689  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
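
[Editor's note] The recurring "connection to the server localhost:8443 was refused" failure in the "describe nodes" step is consistent with the container probes above: every "docker ps -a --filter=name=k8s_kube-apiserver" run returns 0 containers, so nothing is listening on the apiserver's secure port. A quick way to confirm the listener is absent is a plain TCP dial; this is an independent illustration, not part of the test suite.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Try to open a TCP connection to the apiserver's default secure
        // port. If nothing is listening (as in the log above), the dial
        // fails almost immediately with "connection refused".
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
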
	I0216 17:41:38.514785  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:41.013816  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:43.013997  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:45.514713  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
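
[Editor's note] The interleaved "pod_ready.go:102" lines come from a second, concurrent test profile (pid 421205) that keeps polling a metrics-server pod until its Ready condition turns True. With a recent client-go, such a readiness poll looks roughly like the sketch below; the kubeconfig path, namespace, and pod name are taken from the log, but the loop itself is an assumption, not minikube's pod_ready.go implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "metrics-server-57f55c9bc5-tdw8t", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Println("pod not Ready yet; retrying")
            time.Sleep(2 * time.Second)
        }
    }
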
	I0216 17:41:50.604423  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:41:50.614773  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:41:50.632046  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.632072  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:41:50.632120  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:41:50.649668  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.649705  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:41:50.649752  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:41:50.667298  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.667324  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:41:50.667369  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:41:50.684964  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.684985  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:41:50.685058  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:41:50.702294  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.702315  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:41:50.702372  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:41:50.719213  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.719242  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:41:50.719298  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:41:50.739288  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.739316  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:41:50.739379  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:41:50.758688  455078 logs.go:276] 0 containers: []
	W0216 17:41:50.758711  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:41:50.758721  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:41:50.758733  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:41:50.778773  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:29 old-k8s-version-478853 kubelet[1655]: E0216 17:41:29.089698    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.784194  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:32 old-k8s-version-478853 kubelet[1655]: E0216 17:41:32.092561    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.786483  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:50.798383  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:50.801984  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.805643  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.807814  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:41:50.817121  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:41:50.817159  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:50.840704  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:41:50.840735  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:41:50.902600  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:41:50.902624  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:41:50.902661  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:41:50.920132  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:41:50.920249  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:41:50.959025  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:50.959061  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:41:50.959128  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:41:50.959146  455078 out.go:239]   Feb 16 17:41:33 old-k8s-version-478853 kubelet[1655]: E0216 17:41:33.088847    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:41:50.959160  455078 out.go:239]   Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:41:50.959176  455078 out.go:239]   Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:41:50.959188  455078 out.go:239]   Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:41:50.959198  455078 out.go:239]   Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:41:50.959208  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:41:50.959218  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:41:48.014075  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:50.015072  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:52.514476  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:54.514722  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:41:56.515088  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:00.960497  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:00.971191  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:00.988983  455078 logs.go:276] 0 containers: []
	W0216 17:42:00.989007  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:00.989051  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:01.007472  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.007502  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:01.007549  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:01.027235  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.027266  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:01.027328  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:01.045396  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.045418  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:01.045466  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:01.063608  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.063634  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:01.063676  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:01.081846  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.081875  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:01.081933  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:01.100572  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.100605  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:01.100656  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:01.118064  455078 logs.go:276] 0 containers: []
	W0216 17:42:01.118093  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:01.118107  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:01.118120  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:01.178472  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:01.178494  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:01.178510  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:01.194152  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:01.194180  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:01.229057  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:01.229088  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:01.252846  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:40 old-k8s-version-478853 kubelet[1655]: E0216 17:41:40.091198    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.256323  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:42 old-k8s-version-478853 kubelet[1655]: E0216 17:41:42.090258    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.259747  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:44 old-k8s-version-478853 kubelet[1655]: E0216 17:41:44.091098    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.261761  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:01.276222  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.278237  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.281914  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.283854  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:01.288825  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:01.288847  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:41:59.014730  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:01.019158  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:01.312195  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:01.312226  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:01.312273  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:01.312283  455078 out.go:239]   Feb 16 17:41:45 old-k8s-version-478853 kubelet[1655]: E0216 17:41:45.090003    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:01.312293  455078 out.go:239]   Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:01.312302  455078 out.go:239]   Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:01.312314  455078 out.go:239]   Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:01.312323  455078 out.go:239]   Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:01.312330  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:01.312336  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:03.514449  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:06.013458  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:08.014136  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:10.014502  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:12.514123  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:11.313806  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:11.324599  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:11.342926  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.342950  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:11.343009  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:11.361832  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.361863  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:11.361913  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:11.380388  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.380413  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:11.380463  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:11.398531  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.398555  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:11.398609  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:11.416599  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.416633  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:11.416691  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:11.437302  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.437329  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:11.437381  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:11.455500  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.455526  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:11.455588  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:11.473447  455078 logs.go:276] 0 containers: []
	W0216 17:42:11.473472  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:11.473483  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:11.473499  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:11.489109  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:11.489137  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:11.528617  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:11.528657  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:11.554793  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:54 old-k8s-version-478853 kubelet[1655]: E0216 17:41:54.091046    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.556844  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:55 old-k8s-version-478853 kubelet[1655]: E0216 17:41:55.089887    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.560487  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:11.562461  455078 logs.go:138] Found kubelet problem: Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:11.577032  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.577534  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.584091  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:11.584767  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:11.584786  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:11.607897  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:11.607930  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:11.670359  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:11.670384  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:11.670396  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:11.670447  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:11.670457  455078 out.go:239]   Feb 16 17:41:57 old-k8s-version-478853 kubelet[1655]: E0216 17:41:57.090052    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:11.670467  455078 out.go:239]   Feb 16 17:41:58 old-k8s-version-478853 kubelet[1655]: E0216 17:41:58.089244    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:11.670473  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:11.670480  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:11.670488  455078 out.go:239]   Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:42:11.670494  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:11.670502  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:14.514921  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:17.013694  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:19.014567  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:21.513547  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:21.671639  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:21.682566  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:21.700727  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.700751  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:21.700797  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:21.718547  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.718575  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:21.718638  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:21.738352  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.738376  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:21.738422  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:21.758981  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.759006  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:21.759060  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:21.779871  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.779920  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:21.779989  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:21.799706  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.799736  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:21.799787  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:21.817228  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.817255  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:21.817308  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:21.836951  455078 logs.go:276] 0 containers: []
	W0216 17:42:21.836983  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:21.836997  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:21.837012  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:21.872431  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:21.872957  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:21.879827  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:21.881920  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:21.893080  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:21.899070  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:21.899090  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:21.922375  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:21.922425  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:21.984024  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:21.984044  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:21.984056  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:22.000242  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:22.000273  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:22.038265  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:22.038288  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:22.038331  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:22.038355  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.089165    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:22.038363  455078 out.go:239]   Feb 16 17:42:07 old-k8s-version-478853 kubelet[1655]: E0216 17:42:07.090276    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:22.038374  455078 out.go:239]   Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:22.038390  455078 out.go:239]   Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:22.038401  455078 out.go:239]   Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:22.038411  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:22.038419  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
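
[Editor's note] Every StartContainer attempt above fails with ImageInspectError because inspecting the cached control-plane image returns no Id or size, which usually suggests the image metadata in the Docker daemon is incomplete or the image was never actually loaded; that reading is an inference from the log, not a confirmed root cause. One can approximate the check the runtime performs with "docker image inspect", as in this standalone sketch.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask the Docker daemon for the image ID and size, roughly the two
        // fields the kubelet reported as "not set" in the log above.
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}} {{.Size}}", "k8s.gcr.io/kube-apiserver:v1.16.0").Output()
        if err != nil {
            fmt.Println("inspect failed (image missing or daemon unhealthy):", err)
            return
        }
        fields := strings.Fields(string(out))
        if len(fields) < 2 || fields[0] == "" {
            fmt.Println("image metadata incomplete:", string(out))
            return
        }
        fmt.Printf("id=%s size=%s\n", fields[0], fields[1])
    }
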
	I0216 17:42:23.514418  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:26.014295  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:28.014697  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:30.514199  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:32.039537  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:32.050189  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:32.067646  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.067676  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:32.067745  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:32.087169  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.087213  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:32.087271  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:32.105465  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.105488  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:32.105546  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:32.123431  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.123464  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:32.123516  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:32.141039  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.141064  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:32.141122  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:32.159484  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.159515  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:32.159580  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:32.177162  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.177188  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:32.177241  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:32.194247  455078 logs.go:276] 0 containers: []
	W0216 17:42:32.194275  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:32.194287  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:32.194305  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:32.253876  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
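	The "connection refused" on localhost:8443 above means nothing is listening on the apiserver port inside the node, consistent with the empty "docker ps" results for kube-apiserver just before it. A minimal manual probe of the same endpoint (a sketch; assumes curl is present in the node image):
	
	    minikube -p old-k8s-version-478853 ssh -- curl -sk https://localhost:8443/healthz
	
	An immediate connection error here, rather than an HTTP status, matches the later "apiserver process never appeared" conclusion.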
	I0216 17:42:32.253898  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:32.253912  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:32.270178  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:32.270213  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:32.305859  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:32.305889  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:32.328308  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:11 old-k8s-version-478853 kubelet[1655]: E0216 17:42:11.089648    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.330319  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:12 old-k8s-version-478853 kubelet[1655]: E0216 17:42:12.089069    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.341106  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:32.347725  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:32.352568  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.354544  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.364250  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:32.364558  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:32.364575  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:32.389634  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:32.389668  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:32.389721  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:32.389734  455078 out.go:239]   Feb 16 17:42:18 old-k8s-version-478853 kubelet[1655]: E0216 17:42:18.089772    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:32.389744  455078 out.go:239]   Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:32.389754  455078 out.go:239]   Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:32.389762  455078 out.go:239]   Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:32.389781  455078 out.go:239]   Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:42:32.389791  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:32.389801  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:33.014200  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:35.514231  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:38.013671  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:40.014577  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:42.514921  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:42.390328  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:42.401227  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:42:42.419362  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.419393  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:42:42.419438  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:42:42.437451  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.437495  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:42:42.437554  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:42:42.455185  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.455206  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:42:42.455252  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:42:42.472418  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.472439  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:42:42.472493  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:42:42.489791  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.489818  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:42:42.489867  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:42:42.507633  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.507662  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:42:42.507716  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:42:42.526869  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.526889  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:42:42.526943  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:42:42.544969  455078 logs.go:276] 0 containers: []
	W0216 17:42:42.544999  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:42:42.545011  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:42:42.545026  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:42:42.570906  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:22 old-k8s-version-478853 kubelet[1655]: E0216 17:42:22.089623    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.575920  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:25 old-k8s-version-478853 kubelet[1655]: E0216 17:42:25.090857    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.577964  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:42.587726  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:42.592654  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:35 old-k8s-version-478853 kubelet[1655]: E0216 17:42:35.090202    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.600845  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:40 old-k8s-version-478853 kubelet[1655]: E0216 17:42:40.089571    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.602832  455078 logs.go:138] Found kubelet problem: Feb 16 17:42:41 old-k8s-version-478853 kubelet[1655]: E0216 17:42:41.088872    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:42:42.604949  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:42:42.604968  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:42:42.628966  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:42:42.629003  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:42:42.688286  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:42:42.688314  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:42:42.688331  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:42:42.704424  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:42:42.704453  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:42:42.742407  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:42.742433  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:42:42.742493  455078 out.go:239] X Problems detected in kubelet:
	W0216 17:42:42.742501  455078 out.go:239]   Feb 16 17:42:26 old-k8s-version-478853 kubelet[1655]: E0216 17:42:26.089192    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:42:42.742508  455078 out.go:239]   Feb 16 17:42:32 old-k8s-version-478853 kubelet[1655]: E0216 17:42:32.092400    1655 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:42:42.742517  455078 out.go:239]   Feb 16 17:42:35 old-k8s-version-478853 kubelet[1655]: E0216 17:42:35.090202    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:42:42.742552  455078 out.go:239]   Feb 16 17:42:40 old-k8s-version-478853 kubelet[1655]: E0216 17:42:40.089571    1655 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:42:42.742559  455078 out.go:239]   Feb 16 17:42:41 old-k8s-version-478853 kubelet[1655]: E0216 17:42:41.088872    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:42:42.742565  455078 out.go:304] Setting ErrFile to fd 2...
	I0216 17:42:42.742570  455078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:42:45.013590  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:47.014437  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:49.014625  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:51.018229  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:52.743937  455078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:42:52.756372  455078 kubeadm.go:640] restartCluster took 4m18.22848465s
	W0216 17:42:52.756471  455078 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0216 17:42:52.756503  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:42:53.532102  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:42:53.543197  455078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:42:53.551917  455078 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:42:53.552015  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:42:53.560427  455078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
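	The four missing /etc/kubernetes/*.conf files are expected at this point: the "kubeadm reset --force" a few lines above removes them, so the stale-config cleanup is correctly skipped and a fresh "kubeadm init" follows. To confirm the node state by hand (reusing this log's profile name):
	
	    minikube -p old-k8s-version-478853 ssh -- sudo ls -la /etc/kubernetes/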
	I0216 17:42:53.560470  455078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:42:53.726076  455078 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:42:53.785027  455078 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:42:53.785263  455078 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:42:53.865914  455078 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
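	Of the preflight warnings above, the cgroupfs-vs-systemd one is usually benign under minikube, which aligns the kubelet's cgroup driver with Docker's; the unvalidated Docker 25.0.3 warning is the one that lines up with the ImageInspectError failures earlier in this log. For reference, the upstream-documented way to move Docker to the recommended systemd driver is a daemon.json snippet; minikube manages this itself, so treat this purely as a sketch, not something the test applies:
	
	    cat <<'EOF' | sudo tee /etc/docker/daemon.json
	    { "exec-opts": ["native.cgroupdriver=systemd"] }
	    EOF
	    sudo systemctl restart docker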
	I0216 17:42:53.514433  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:55.514763  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:42:58.013778  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:00.014590  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:02.014950  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:04.514618  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:07.013926  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:09.014181  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:11.014763  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:13.515121  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:16.013918  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:18.514051  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:20.514306  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:22.514811  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:25.013525  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:27.013742  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:29.014501  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:31.513545  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:34.014031  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:36.014709  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:38.513816  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:40.514446  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:43.013427  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:45.014134  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:47.513936  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:49.514346  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:51.514431  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:54.014365  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:56.513375  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:43:58.513426  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:00.513773  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:02.514281  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:04.514604  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:07.013705  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:09.014230  421205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace has status "Ready":"False"
	I0216 17:44:11.013826  421205 pod_ready.go:81] duration metric: took 4m0.005667428s waiting for pod "metrics-server-57f55c9bc5-tdw8t" in "kube-system" namespace to be "Ready" ...
	E0216 17:44:11.013856  421205 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0216 17:44:11.013866  421205 pod_ready.go:38] duration metric: took 4m1.603526555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
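	The 4m0s wait gives up because metrics-server never reports Ready; the pod listing further down shows it Pending with its container not ready, a common CI symptom when the metrics-server image cannot be pulled or its probes fail. A hedged triage, assuming kubectl points at this cluster and the addon's usual k8s-app label:
	
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server
	    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50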
	I0216 17:44:11.013886  421205 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:44:11.013951  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:44:11.033314  421205 logs.go:276] 1 containers: [81cedd311576]
	I0216 17:44:11.033392  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:44:11.051441  421205 logs.go:276] 1 containers: [2cb16166baeb]
	I0216 17:44:11.051513  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:44:11.069761  421205 logs.go:276] 1 containers: [69361b065c2a]
	I0216 17:44:11.069845  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:44:11.088208  421205 logs.go:276] 1 containers: [a24a5700c6d2]
	I0216 17:44:11.088289  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:44:11.106422  421205 logs.go:276] 1 containers: [5e90a8c74405]
	I0216 17:44:11.106498  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:44:11.124867  421205 logs.go:276] 1 containers: [642332d4dcfa]
	I0216 17:44:11.124958  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:44:11.142118  421205 logs.go:276] 0 containers: []
	W0216 17:44:11.142141  421205 logs.go:278] No container was found matching "kindnet"
	I0216 17:44:11.142191  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:44:11.160173  421205 logs.go:276] 1 containers: [92a352db1498]
	I0216 17:44:11.160255  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:44:11.177989  421205 logs.go:276] 1 containers: [b759a7f6ed7e]
	I0216 17:44:11.178048  421205 logs.go:123] Gathering logs for kube-proxy [5e90a8c74405] ...
	I0216 17:44:11.178059  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e90a8c74405"
	I0216 17:44:11.198870  421205 logs.go:123] Gathering logs for kube-controller-manager [642332d4dcfa] ...
	I0216 17:44:11.198912  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642332d4dcfa"
	I0216 17:44:11.239691  421205 logs.go:123] Gathering logs for kubernetes-dashboard [92a352db1498] ...
	I0216 17:44:11.239723  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a352db1498"
	I0216 17:44:11.260526  421205 logs.go:123] Gathering logs for kube-apiserver [81cedd311576] ...
	I0216 17:44:11.260555  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81cedd311576"
	I0216 17:44:11.289289  421205 logs.go:123] Gathering logs for coredns [69361b065c2a] ...
	I0216 17:44:11.289322  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69361b065c2a"
	I0216 17:44:11.309043  421205 logs.go:123] Gathering logs for dmesg ...
	I0216 17:44:11.309071  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:44:11.332161  421205 logs.go:123] Gathering logs for kube-scheduler [a24a5700c6d2] ...
	I0216 17:44:11.332193  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24a5700c6d2"
	I0216 17:44:11.358612  421205 logs.go:123] Gathering logs for Docker ...
	I0216 17:44:11.358647  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:44:11.416041  421205 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:44:11.416086  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:44:11.509698  421205 logs.go:123] Gathering logs for etcd [2cb16166baeb] ...
	I0216 17:44:11.509732  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb16166baeb"
	I0216 17:44:11.536694  421205 logs.go:123] Gathering logs for container status ...
	I0216 17:44:11.536723  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:44:11.592923  421205 logs.go:123] Gathering logs for kubelet ...
	I0216 17:44:11.592963  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:44:11.687551  421205 logs.go:123] Gathering logs for storage-provisioner [b759a7f6ed7e] ...
	I0216 17:44:11.687590  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b759a7f6ed7e"
	I0216 17:44:14.209355  421205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:44:14.220881  421205 api_server.go:72] duration metric: took 4m7.100345078s to wait for apiserver process to appear ...
	I0216 17:44:14.220908  421205 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:44:14.220988  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:44:14.239026  421205 logs.go:276] 1 containers: [81cedd311576]
	I0216 17:44:14.239106  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:44:14.257252  421205 logs.go:276] 1 containers: [2cb16166baeb]
	I0216 17:44:14.257337  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:44:14.276104  421205 logs.go:276] 1 containers: [69361b065c2a]
	I0216 17:44:14.276225  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:44:14.293861  421205 logs.go:276] 1 containers: [a24a5700c6d2]
	I0216 17:44:14.293948  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:44:14.311926  421205 logs.go:276] 1 containers: [5e90a8c74405]
	I0216 17:44:14.312006  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:44:14.330376  421205 logs.go:276] 1 containers: [642332d4dcfa]
	I0216 17:44:14.330464  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:44:14.348311  421205 logs.go:276] 0 containers: []
	W0216 17:44:14.348340  421205 logs.go:278] No container was found matching "kindnet"
	I0216 17:44:14.348395  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:44:14.368016  421205 logs.go:276] 1 containers: [92a352db1498]
	I0216 17:44:14.368086  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:44:14.387308  421205 logs.go:276] 1 containers: [b759a7f6ed7e]
	I0216 17:44:14.387345  421205 logs.go:123] Gathering logs for kubelet ...
	I0216 17:44:14.387355  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:44:14.476907  421205 logs.go:123] Gathering logs for dmesg ...
	I0216 17:44:14.476947  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:44:14.500367  421205 logs.go:123] Gathering logs for etcd [2cb16166baeb] ...
	I0216 17:44:14.500402  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb16166baeb"
	I0216 17:44:14.526205  421205 logs.go:123] Gathering logs for kube-scheduler [a24a5700c6d2] ...
	I0216 17:44:14.526246  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24a5700c6d2"
	I0216 17:44:14.552875  421205 logs.go:123] Gathering logs for Docker ...
	I0216 17:44:14.552907  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:44:14.611707  421205 logs.go:123] Gathering logs for kubernetes-dashboard [92a352db1498] ...
	I0216 17:44:14.611748  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a352db1498"
	I0216 17:44:14.632591  421205 logs.go:123] Gathering logs for container status ...
	I0216 17:44:14.632617  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:44:14.688302  421205 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:44:14.688332  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:44:14.791155  421205 logs.go:123] Gathering logs for kube-apiserver [81cedd311576] ...
	I0216 17:44:14.791187  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81cedd311576"
	I0216 17:44:14.821565  421205 logs.go:123] Gathering logs for kube-proxy [5e90a8c74405] ...
	I0216 17:44:14.821602  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e90a8c74405"
	I0216 17:44:14.842459  421205 logs.go:123] Gathering logs for storage-provisioner [b759a7f6ed7e] ...
	I0216 17:44:14.842490  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b759a7f6ed7e"
	I0216 17:44:14.862520  421205 logs.go:123] Gathering logs for coredns [69361b065c2a] ...
	I0216 17:44:14.862546  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69361b065c2a"
	I0216 17:44:14.884028  421205 logs.go:123] Gathering logs for kube-controller-manager [642332d4dcfa] ...
	I0216 17:44:14.884059  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642332d4dcfa"
	I0216 17:44:17.424507  421205 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0216 17:44:17.428688  421205 api_server.go:279] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0216 17:44:17.429696  421205 api_server.go:141] control plane version: v1.28.4
	I0216 17:44:17.429714  421205 api_server.go:131] duration metric: took 3.208801048s to wait for apiserver health ...
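	The 200 from /healthz on 192.168.67.2:8444 is the first confirmation in this run that the apiserver is serving; the non-default 8444 port is the point of the default-k8s-diff-port profile. The same check can be reproduced without ssh, assuming minikube's usual context naming (context = profile name):
	
	    kubectl --context default-k8s-diff-port-816748 get --raw /healthz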
	I0216 17:44:17.429722  421205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:44:17.429777  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:44:17.448521  421205 logs.go:276] 1 containers: [81cedd311576]
	I0216 17:44:17.448634  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:44:17.469956  421205 logs.go:276] 1 containers: [2cb16166baeb]
	I0216 17:44:17.470035  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:44:17.487435  421205 logs.go:276] 1 containers: [69361b065c2a]
	I0216 17:44:17.487517  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:44:17.509228  421205 logs.go:276] 1 containers: [a24a5700c6d2]
	I0216 17:44:17.509299  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:44:17.527465  421205 logs.go:276] 1 containers: [5e90a8c74405]
	I0216 17:44:17.527538  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:44:17.545275  421205 logs.go:276] 1 containers: [642332d4dcfa]
	I0216 17:44:17.545352  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:44:17.564370  421205 logs.go:276] 0 containers: []
	W0216 17:44:17.564392  421205 logs.go:278] No container was found matching "kindnet"
	I0216 17:44:17.564435  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:44:17.583087  421205 logs.go:276] 1 containers: [92a352db1498]
	I0216 17:44:17.583149  421205 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0216 17:44:17.601835  421205 logs.go:276] 1 containers: [b759a7f6ed7e]
	I0216 17:44:17.601871  421205 logs.go:123] Gathering logs for etcd [2cb16166baeb] ...
	I0216 17:44:17.601882  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb16166baeb"
	I0216 17:44:17.629168  421205 logs.go:123] Gathering logs for coredns [69361b065c2a] ...
	I0216 17:44:17.629205  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69361b065c2a"
	I0216 17:44:17.649281  421205 logs.go:123] Gathering logs for kube-proxy [5e90a8c74405] ...
	I0216 17:44:17.649307  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e90a8c74405"
	I0216 17:44:17.670885  421205 logs.go:123] Gathering logs for container status ...
	I0216 17:44:17.670920  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:44:17.725719  421205 logs.go:123] Gathering logs for kube-controller-manager [642332d4dcfa] ...
	I0216 17:44:17.725751  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642332d4dcfa"
	I0216 17:44:17.765369  421205 logs.go:123] Gathering logs for kube-apiserver [81cedd311576] ...
	I0216 17:44:17.765410  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81cedd311576"
	I0216 17:44:17.799148  421205 logs.go:123] Gathering logs for kube-scheduler [a24a5700c6d2] ...
	I0216 17:44:17.799187  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24a5700c6d2"
	I0216 17:44:17.826035  421205 logs.go:123] Gathering logs for kubernetes-dashboard [92a352db1498] ...
	I0216 17:44:17.826071  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a352db1498"
	I0216 17:44:17.847880  421205 logs.go:123] Gathering logs for storage-provisioner [b759a7f6ed7e] ...
	I0216 17:44:17.847914  421205 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b759a7f6ed7e"
	I0216 17:44:17.868797  421205 logs.go:123] Gathering logs for Docker ...
	I0216 17:44:17.868824  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:44:17.926736  421205 logs.go:123] Gathering logs for kubelet ...
	I0216 17:44:17.926772  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 17:44:18.020846  421205 logs.go:123] Gathering logs for dmesg ...
	I0216 17:44:18.020884  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:44:18.046687  421205 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:44:18.046726  421205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0216 17:44:20.647381  421205 system_pods.go:59] 8 kube-system pods found
	I0216 17:44:20.647412  421205 system_pods.go:61] "coredns-5dd5756b68-6dd5s" [64070971-4c96-4bae-8c6a-e661926c6fc2] Running
	I0216 17:44:20.647420  421205 system_pods.go:61] "etcd-default-k8s-diff-port-816748" [08890543-29f8-4ada-8e5b-9f6d7867eb3c] Running
	I0216 17:44:20.647426  421205 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-816748" [86f78562-715d-466e-94c1-b3a76772ec12] Running
	I0216 17:44:20.647432  421205 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-816748" [5f311605-3fd2-4f1e-b0fa-cab39e6a86d2] Running
	I0216 17:44:20.647437  421205 system_pods.go:61] "kube-proxy-f7czt" [0f96b293-f1b0-42e8-b281-afae41342cf9] Running
	I0216 17:44:20.647442  421205 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-816748" [1366c3ec-1790-4ff3-b3aa-bb9dfe5b719a] Running
	I0216 17:44:20.647452  421205 system_pods.go:61] "metrics-server-57f55c9bc5-tdw8t" [5b4055e5-de9d-40e3-af47-591d406323be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:44:20.647460  421205 system_pods.go:61] "storage-provisioner" [756405eb-38f6-4c3e-9834-ef4f519f42ef] Running
	I0216 17:44:20.647471  421205 system_pods.go:74] duration metric: took 3.217742634s to wait for pod list to return data ...
	I0216 17:44:20.647482  421205 default_sa.go:34] waiting for default service account to be created ...
	I0216 17:44:20.649822  421205 default_sa.go:45] found service account: "default"
	I0216 17:44:20.649843  421205 default_sa.go:55] duration metric: took 2.354783ms for default service account to be created ...
	I0216 17:44:20.649851  421205 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 17:44:20.654326  421205 system_pods.go:86] 8 kube-system pods found
	I0216 17:44:20.654349  421205 system_pods.go:89] "coredns-5dd5756b68-6dd5s" [64070971-4c96-4bae-8c6a-e661926c6fc2] Running
	I0216 17:44:20.654354  421205 system_pods.go:89] "etcd-default-k8s-diff-port-816748" [08890543-29f8-4ada-8e5b-9f6d7867eb3c] Running
	I0216 17:44:20.654359  421205 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-816748" [86f78562-715d-466e-94c1-b3a76772ec12] Running
	I0216 17:44:20.654364  421205 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-816748" [5f311605-3fd2-4f1e-b0fa-cab39e6a86d2] Running
	I0216 17:44:20.654368  421205 system_pods.go:89] "kube-proxy-f7czt" [0f96b293-f1b0-42e8-b281-afae41342cf9] Running
	I0216 17:44:20.654372  421205 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-816748" [1366c3ec-1790-4ff3-b3aa-bb9dfe5b719a] Running
	I0216 17:44:20.654379  421205 system_pods.go:89] "metrics-server-57f55c9bc5-tdw8t" [5b4055e5-de9d-40e3-af47-591d406323be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 17:44:20.654388  421205 system_pods.go:89] "storage-provisioner" [756405eb-38f6-4c3e-9834-ef4f519f42ef] Running
	I0216 17:44:20.654398  421205 system_pods.go:126] duration metric: took 4.542164ms to wait for k8s-apps to be running ...
	I0216 17:44:20.654408  421205 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 17:44:20.654451  421205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:44:20.665886  421205 system_svc.go:56] duration metric: took 11.471113ms WaitForService to wait for kubelet.
	I0216 17:44:20.665915  421205 kubeadm.go:581] duration metric: took 4m13.545386541s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 17:44:20.665948  421205 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:44:20.668869  421205 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0216 17:44:20.668894  421205 node_conditions.go:123] node cpu capacity is 8
	I0216 17:44:20.668909  421205 node_conditions.go:105] duration metric: took 2.9556ms to run NodePressure ...
	I0216 17:44:20.668922  421205 start.go:228] waiting for startup goroutines ...
	I0216 17:44:20.668931  421205 start.go:233] waiting for cluster config update ...
	I0216 17:44:20.668948  421205 start.go:242] writing updated cluster config ...
	I0216 17:44:20.669255  421205 ssh_runner.go:195] Run: rm -f paused
	I0216 17:44:20.718390  421205 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0216 17:44:20.720392  421205 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-816748" cluster and "default" namespace by default
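	The closing version note (kubectl 1.29.2 against cluster 1.28.4) is within kubectl's supported skew of one minor version, which is why it is reported as information rather than a warning. To see both versions side by side:
	
	    kubectl --context default-k8s-diff-port-816748 version --output=yaml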
	I0216 17:46:54.897764  455078 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:46:54.897901  455078 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:46:54.900889  455078 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:46:54.900952  455078 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:46:54.901057  455078 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:46:54.901118  455078 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:46:54.901164  455078 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:46:54.901258  455078 kubeadm.go:322] OS: Linux
	I0216 17:46:54.901344  455078 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:46:54.901414  455078 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:46:54.901483  455078 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:46:54.901549  455078 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:46:54.901599  455078 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:46:54.901645  455078 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:46:54.901736  455078 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:46:54.901873  455078 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:46:54.902013  455078 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 17:46:54.902166  455078 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:46:54.902269  455078 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:46:54.902349  455078 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:46:54.902439  455078 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:46:54.905049  455078 out.go:204]   - Generating certificates and keys ...
	I0216 17:46:54.905136  455078 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:46:54.905209  455078 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:46:54.905290  455078 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:46:54.905360  455078 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:46:54.905435  455078 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:46:54.905485  455078 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:46:54.905549  455078 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:46:54.905608  455078 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:46:54.905668  455078 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:46:54.905730  455078 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:46:54.905789  455078 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:46:54.905857  455078 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:46:54.905899  455078 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:46:54.905946  455078 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:46:54.905996  455078 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:46:54.906054  455078 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:46:54.906113  455078 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:46:54.908366  455078 out.go:204]   - Booting up control plane ...
	I0216 17:46:54.908451  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:46:54.908521  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:46:54.908576  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:46:54.908644  455078 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:46:54.908802  455078 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:46:54.908855  455078 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:46:54.908861  455078 kubeadm.go:322] 
	I0216 17:46:54.908893  455078 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:46:54.908926  455078 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:46:54.908932  455078 kubeadm.go:322] 
	I0216 17:46:54.908967  455078 kubeadm.go:322] This error is likely caused by:
	I0216 17:46:54.908996  455078 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:46:54.909083  455078 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:46:54.909090  455078 kubeadm.go:322] 
	I0216 17:46:54.909170  455078 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:46:54.909199  455078 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:46:54.909225  455078 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:46:54.909231  455078 kubeadm.go:322] 
	I0216 17:46:54.909312  455078 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:46:54.909392  455078 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0216 17:46:54.909464  455078 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0216 17:46:54.909509  455078 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:46:54.909573  455078 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:46:54.909628  455078 kubeadm.go:322] 	- 'docker logs CONTAINERID'
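	kubeadm's advice above can be run directly against the minikube node; a sketch combining the suggested checks through minikube ssh (profile name taken from this log):
	
	    minikube -p old-k8s-version-478853 ssh -- sudo systemctl status kubelet --no-pager
	    minikube -p old-k8s-version-478853 ssh -- sudo journalctl -u kubelet --no-pager -n 50
	    minikube -p old-k8s-version-478853 ssh -- "docker ps -a | grep kube | grep -v pause"
	
	In this run the kubelet is running but every StartContainer attempt fails with the ImageInspectError, so the journal output is the informative one.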
	W0216 17:46:54.909766  455078 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 17:46:54.909815  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:46:55.653997  455078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:46:55.665110  455078 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:46:55.665171  455078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:46:55.673735  455078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:46:55.673786  455078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:46:55.722375  455078 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:46:55.722432  455078 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:46:55.894761  455078 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:46:55.894856  455078 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0216 17:46:55.894909  455078 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:46:55.894973  455078 kubeadm.go:322] OS: Linux
	I0216 17:46:55.895037  455078 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:46:55.895101  455078 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:46:55.895159  455078 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:46:55.895220  455078 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:46:55.895285  455078 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:46:55.895341  455078 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:46:55.967714  455078 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:46:55.967839  455078 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:46:55.967958  455078 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:46:56.138307  455078 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:46:56.139389  455078 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:46:56.146473  455078 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:46:56.222590  455078 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:46:56.225987  455078 out.go:204]   - Generating certificates and keys ...
	I0216 17:46:56.226094  455078 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:46:56.226182  455078 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:46:56.226277  455078 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:46:56.226364  455078 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:46:56.226459  455078 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:46:56.226532  455078 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:46:56.226620  455078 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:46:56.226731  455078 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:46:56.226833  455078 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:46:56.226958  455078 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:46:56.227020  455078 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:46:56.227109  455078 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:46:56.394947  455078 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:46:56.547719  455078 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:46:56.909016  455078 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:46:57.118906  455078 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:46:57.119703  455078 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:46:57.121695  455078 out.go:204]   - Booting up control plane ...
	I0216 17:46:57.121837  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:46:57.126402  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:46:57.127880  455078 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:46:57.128910  455078 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:46:57.132135  455078 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:47:37.132515  455078 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:50:57.133720  455078 kubeadm.go:322] 
	I0216 17:50:57.133814  455078 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:50:57.133878  455078 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:50:57.133889  455078 kubeadm.go:322] 
	I0216 17:50:57.133928  455078 kubeadm.go:322] This error is likely caused by:
	I0216 17:50:57.133973  455078 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:50:57.134138  455078 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:50:57.134168  455078 kubeadm.go:322] 
	I0216 17:50:57.134317  455078 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:50:57.134386  455078 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:50:57.134454  455078 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:50:57.134477  455078 kubeadm.go:322] 
	I0216 17:50:57.134600  455078 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:50:57.134682  455078 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:50:57.134772  455078 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:50:57.134854  455078 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:50:57.134948  455078 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:50:57.134989  455078 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:50:57.136987  455078 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:50:57.137100  455078 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:50:57.137301  455078 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0216 17:50:57.137405  455078 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:50:57.137479  455078 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:50:57.137562  455078 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:50:57.137603  455078 kubeadm.go:406] StartCluster complete in 12m22.638718493s
	I0216 17:50:57.137690  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:50:57.155966  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.155994  455078 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:50:57.156042  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:50:57.173312  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.173339  455078 logs.go:278] No container was found matching "etcd"
	I0216 17:50:57.173395  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:50:57.190861  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.190885  455078 logs.go:278] No container was found matching "coredns"
	I0216 17:50:57.190939  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:50:57.208223  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.208245  455078 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:50:57.208292  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:50:57.224808  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.224835  455078 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:50:57.224887  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:50:57.242004  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.242026  455078 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:50:57.242066  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:50:57.258500  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.258522  455078 logs.go:278] No container was found matching "kindnet"
	I0216 17:50:57.258562  455078 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:50:57.275390  455078 logs.go:276] 0 containers: []
	W0216 17:50:57.275415  455078 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:50:57.275427  455078 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:50:57.275443  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:50:57.336885  455078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:50:57.336911  455078 logs.go:123] Gathering logs for Docker ...
	I0216 17:50:57.336929  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:50:57.354268  455078 logs.go:123] Gathering logs for container status ...
	I0216 17:50:57.354298  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:50:57.388996  455078 logs.go:123] Gathering logs for kubelet ...
	I0216 17:50:57.389022  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:50:57.410914  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:50:57.418232  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:50:57.420274  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:50:57.423841  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:43 old-k8s-version-478853 kubelet[11238]: E0216 17:50:43.867344   11238 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:50:57.433982  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:49 old-k8s-version-478853 kubelet[11238]: E0216 17:50:49.865840   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:50:57.437556  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:51 old-k8s-version-478853 kubelet[11238]: E0216 17:50:51.865653   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:50:57.446171  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:56 old-k8s-version-478853 kubelet[11238]: E0216 17:50:56.867671   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:50:57.446448  455078 logs.go:138] Found kubelet problem: Feb 16 17:50:56 old-k8s-version-478853 kubelet[11238]: E0216 17:50:56.868767   11238 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-old-k8s-version-478853_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:50:57.447246  455078 logs.go:123] Gathering logs for dmesg ...
	I0216 17:50:57.447271  455078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0216 17:50:57.472300  455078 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:50:57.472350  455078 out.go:239] * 
	W0216 17:50:57.472421  455078 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:50:57.472446  455078 out.go:239] * 
	W0216 17:50:57.473265  455078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:50:57.475359  455078 out.go:177] X Problems detected in kubelet:
	I0216 17:50:57.477187  455078 out.go:177]   Feb 16 17:50:36 old-k8s-version-478853 kubelet[11238]: E0216 17:50:36.867626   11238 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-478853_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:50:57.478538  455078 out.go:177]   Feb 16 17:50:40 old-k8s-version-478853 kubelet[11238]: E0216 17:50:40.868238   11238 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-478853_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:50:57.479997  455078 out.go:177]   Feb 16 17:50:41 old-k8s-version-478853 kubelet[11238]: E0216 17:50:41.867498   11238 pod_workers.go:191] Error syncing pod 75eac6fd65f4f8477f5572974a7da828 ("etcd-old-k8s-version-478853_kube-system(75eac6fd65f4f8477f5572974a7da828)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:50:57.482565  455078 out.go:177] 
	W0216 17:50:57.483906  455078 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:50:57.483958  455078 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:50:57.483983  455078 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 17:50:57.485600  455078 out.go:177] 
	
	
	==> Docker <==
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[849]: time="2024-02-16T17:38:30.592980618Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[849]: time="2024-02-16T17:38:30.594584628Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[849]: time="2024-02-16T17:38:30.595536484Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:38:30 old-k8s-version-478853 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:30.645142910Z" level=info msg="Starting up"
	Feb 16 17:38:30 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:30.665356524Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:38:32 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:32.943709848Z" level=info msg="Loading containers: start."
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.046603047Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.084755081Z" level=info msg="Loading containers: done."
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.093893706Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.093969854Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.114320129Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:38:33 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:38:33.114404690Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:38:33 old-k8s-version-478853 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.297314929Z" level=info msg="ignoring event" container=e2af60e34ffaad5efd27998301557aa7bc6eafb37879f3641ec191f87756d240 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.365203834Z" level=info msg="ignoring event" container=501f90c2772906bc6d8ded9653807e77cb8a8a92587ad8fe1491c9b9c0875e6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.429498715Z" level=info msg="ignoring event" container=f872d0e5597d4ff659d8ce99042c5e1e430481e415e0287b6c0b970158121faa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:42:53 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:42:53.494454551Z" level=info msg="ignoring event" container=f19e6b2d39c6061bb413cdfe4fadfa71b989ba84c976ad403d332b29446cb4fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.423953515Z" level=info msg="ignoring event" container=b1b1b40b37050624d9c0b249cdca8e460ccce350d500ee9689e7d0b2f1a6d93d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.490441340Z" level=info msg="ignoring event" container=5cae44ae4a1b017120f0ee3d1e2fb8e897a46c84d4f5ecb92082ff2491dee106 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.553837099Z" level=info msg="ignoring event" container=cdda83e2154a7e2eb9b7f5b60fd5ba82cffbef69661670436c524e6d68f1aa40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:46:55 old-k8s-version-478853 dockerd[1072]: time="2024-02-16T17:46:55.620386107Z" level=info msg="ignoring event" container=6783776bc128111b7a739f1f7b7bbc1ce484483b75360855dc6ec8cbeecc9c7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e a8 fe f3 03 85 08 06
	[Feb16 17:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba d4 5b d6 50 19 08 06
	[Feb16 17:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 c0 9b 14 00 15 08 06
	[Feb16 17:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 bc 63 d6 82 6d 08 06
	[Feb16 17:35] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 2e 9d 9f f9 35 08 06
	[Feb16 17:36] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 58 28 6b 8d e8 08 06
	[  +2.713951] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 dc a2 ed 93 ee 08 06
	[  +9.193699] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca a1 30 ea 88 7e 08 06
	[  +0.019629] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 7a 9f 93 dd d6 08 06
	[Feb16 17:37] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e de b5 78 ba 0d 08 06
	[Feb16 17:38] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea c2 b1 8d 0f 93 08 06
	[Feb16 17:40] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 d2 9f 93 96 cc 08 06
	[ +10.846771] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 5a c7 81 70 16 08 06
	
	
	==> kernel <==
	 17:58:02 up  1:40,  0 users,  load average: 0.03, 0.08, 0.63
	Linux old-k8s-version-478853 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 17:58:00 old-k8s-version-478853 kubelet[11238]: E0216 17:58:00.608260   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:00 old-k8s-version-478853 kubelet[11238]: E0216 17:58:00.708554   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:00 old-k8s-version-478853 kubelet[11238]: E0216 17:58:00.797134   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:00 old-k8s-version-478853 kubelet[11238]: E0216 17:58:00.808781   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:00 old-k8s-version-478853 kubelet[11238]: E0216 17:58:00.909053   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:00 old-k8s-version-478853 kubelet[11238]: E0216 17:58:00.996967   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.009313   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.109510   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.197834   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.209712   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.309865   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.397938   11238 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.410059   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.510245   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.597881   11238 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.610492   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.710685   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.797891   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.810894   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.911146   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:01 old-k8s-version-478853 kubelet[11238]: E0216 17:58:01.997779   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:02 old-k8s-version-478853 kubelet[11238]: E0216 17:58:02.011361   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:02 old-k8s-version-478853 kubelet[11238]: E0216 17:58:02.111560   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	Feb 16 17:58:02 old-k8s-version-478853 kubelet[11238]: E0216 17:58:02.198508   11238 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-478853&limit=500&resourceVersion=0: dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:58:02 old-k8s-version-478853 kubelet[11238]: E0216 17:58:02.211781   11238 kubelet.go:2267] node "old-k8s-version-478853" not found
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-478853 -n old-k8s-version-478853
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 2 (283.319473ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-478853" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (423.58s)
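
Editor's note: the failure above is internally consistent. kubeadm's wait-control-plane phase timed out because the kubelet never got the control-plane containers running; the "Problems detected in kubelet" entries show ImageInspectError ("Id or size of image ... is not set") for every static pod, which fits the preflight warning that Docker 25.0.3 is far outside the validated range for Kubernetes v1.16.0. The sketch below collects into one sequence the triage and remediation commands the log itself recommends. It is a minimal sketch, assuming shell access to this run's profile (old-k8s-version-478853) via minikube ssh; the profile name, Kubernetes version, driver flags, and the kubelet.cgroup-driver suggestion are taken from the log above, and the exact invocations are illustrative rather than a verified fix.

    # Inspect the kubelet and any control-plane containers, as the kubeadm
    # output recommends:
    minikube ssh -p old-k8s-version-478853 -- 'sudo systemctl status kubelet'
    minikube ssh -p old-k8s-version-478853 -- 'sudo journalctl -xeu kubelet | tail -n 50'
    minikube ssh -p old-k8s-version-478853 -- 'docker ps -a | grep kube | grep -v pause'

    # Retry with the cgroup driver the log suggests (see the "Suggestion" and
    # "Related issue" lines above):
    minikube start -p old-k8s-version-478853 \
      --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker \
      --extra-config=kubelet.cgroup-driver=systemd

If the ImageInspectError persists after the cgroup-driver change, the Docker 25.0.3 / v1.16.0 mismatch flagged by the SystemVerification warning is the likelier culprit.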


Test pass (299/331)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 33.5
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.21
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 14.51
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.2
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 38.49
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 1.23
30 TestBinaryMirror 0.73
31 TestOffline 89.99
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 133.22
38 TestAddons/parallel/Registry 15.65
39 TestAddons/parallel/Ingress 19.14
40 TestAddons/parallel/InspektorGadget 10.61
41 TestAddons/parallel/MetricsServer 5.62
42 TestAddons/parallel/HelmTiller 10.47
44 TestAddons/parallel/CSI 61.66
45 TestAddons/parallel/Headlamp 16.3
46 TestAddons/parallel/CloudSpanner 5.73
47 TestAddons/parallel/LocalPath 54.44
48 TestAddons/parallel/NvidiaDevicePlugin 5.58
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.13
53 TestAddons/StoppedEnableDisable 11.03
54 TestCertOptions 29.73
55 TestCertExpiration 238.26
56 TestDockerFlags 28.7
57 TestForceSystemdFlag 34.01
58 TestForceSystemdEnv 33.5
60 TestKVMDriverInstallOrUpdate 4.98
64 TestErrorSpam/setup 22.34
65 TestErrorSpam/start 0.63
66 TestErrorSpam/status 0.91
67 TestErrorSpam/pause 1.17
68 TestErrorSpam/unpause 1.24
69 TestErrorSpam/stop 1.89
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 42.53
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 39.06
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.3
81 TestFunctional/serial/CacheCmd/cache/add_local 1.48
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.32
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 38.97
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.02
92 TestFunctional/serial/LogsFileCmd 1.06
93 TestFunctional/serial/InvalidService 4.11
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 10.05
97 TestFunctional/parallel/DryRun 0.48
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.14
103 TestFunctional/parallel/ServiceCmdConnect 10.58
104 TestFunctional/parallel/AddonsCmd 0.24
105 TestFunctional/parallel/PersistentVolumeClaim 34.86
107 TestFunctional/parallel/SSHCmd 0.57
108 TestFunctional/parallel/CpCmd 1.95
109 TestFunctional/parallel/MySQL 28.71
110 TestFunctional/parallel/FileSync 0.31
111 TestFunctional/parallel/CertSync 1.7
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
119 TestFunctional/parallel/License 0.62
120 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.27
126 TestFunctional/parallel/ServiceCmd/List 0.61
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
129 TestFunctional/parallel/ServiceCmd/Format 0.49
130 TestFunctional/parallel/ServiceCmd/URL 0.49
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
132 TestFunctional/parallel/MountCmd/any-port 8.3
133 TestFunctional/parallel/ProfileCmd/profile_list 0.35
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/Version/short 0.06
142 TestFunctional/parallel/Version/components 0.53
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
147 TestFunctional/parallel/ImageCommands/ImageBuild 4.62
148 TestFunctional/parallel/ImageCommands/Setup 2.01
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.79
150 TestFunctional/parallel/DockerEnv/bash 1.01
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.06
155 TestFunctional/parallel/MountCmd/specific-port 1.8
156 TestFunctional/parallel/MountCmd/VerifyCleanup 2.06
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.71
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.27
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.01
162 TestFunctional/delete_addon-resizer_images 0.07
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.01
168 TestImageBuild/serial/Setup 25
169 TestImageBuild/serial/NormalBuild 2.43
170 TestImageBuild/serial/BuildWithBuildArg 0.9
171 TestImageBuild/serial/BuildWithDockerIgnore 0.72
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.81
181 TestJSONOutput/start/Command 41.08
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.53
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.51
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 10.86
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 25.83
207 TestKicCustomNetwork/use_default_bridge_network 28.56
208 TestKicExistingNetwork 25.02
209 TestKicCustomSubnet 27.06
210 TestKicStaticIP 27.73
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 56.33
215 TestMountStart/serial/StartWithMountFirst 6.98
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 9.74
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.46
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.18
222 TestMountStart/serial/RestartStopped 8.08
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 65.56
227 TestMultiNode/serial/DeployApp2Nodes 41.43
228 TestMultiNode/serial/PingHostFrom2Pods 0.82
229 TestMultiNode/serial/AddNode 15.74
230 TestMultiNode/serial/MultiNodeLabels 0.08
231 TestMultiNode/serial/ProfileList 0.37
232 TestMultiNode/serial/CopyFile 9.64
233 TestMultiNode/serial/StopNode 2.16
234 TestMultiNode/serial/StartAfterStop 11.69
235 TestMultiNode/serial/RestartKeepsNodes 116.93
236 TestMultiNode/serial/DeleteNode 4.74
237 TestMultiNode/serial/StopMultiNode 21.44
238 TestMultiNode/serial/RestartMultiNode 56.84
239 TestMultiNode/serial/ValidateNameConflict 27.24
244 TestPreload 182
246 TestScheduledStopUnix 95.43
247 TestSkaffold 120.63
249 TestInsufficientStorage 13.34
250 TestRunningBinaryUpgrade 121.64
253 TestMissingContainerUpgrade 140.21
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 35.18
268 TestNoKubernetes/serial/StartWithStopK8s 10.85
269 TestNoKubernetes/serial/Start 6.98
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
271 TestNoKubernetes/serial/ProfileList 1.45
272 TestNoKubernetes/serial/Stop 1.21
273 TestNoKubernetes/serial/StartNoArgs 7.74
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
275 TestStoppedBinaryUpgrade/Setup 2.49
276 TestStoppedBinaryUpgrade/Upgrade 68.97
278 TestPause/serial/Start 80.73
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
287 TestNetworkPlugins/group/auto/Start 69.75
288 TestNetworkPlugins/group/kindnet/Start 53.17
289 TestPause/serial/SecondStartNoReconfiguration 34.22
290 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
291 TestNetworkPlugins/group/auto/KubeletFlags 0.28
292 TestNetworkPlugins/group/auto/NetCatPod 9.22
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
294 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
295 TestNetworkPlugins/group/auto/DNS 0.13
296 TestNetworkPlugins/group/auto/Localhost 0.11
297 TestNetworkPlugins/group/auto/HairPin 0.11
298 TestPause/serial/Pause 0.49
299 TestPause/serial/VerifyStatus 0.3
300 TestPause/serial/Unpause 0.47
301 TestNetworkPlugins/group/kindnet/DNS 0.15
302 TestNetworkPlugins/group/kindnet/Localhost 0.14
303 TestPause/serial/PauseAgain 0.74
304 TestNetworkPlugins/group/kindnet/HairPin 0.18
305 TestPause/serial/DeletePaused 2.14
306 TestPause/serial/VerifyDeletedResources 0.67
307 TestNetworkPlugins/group/calico/Start 72.23
308 TestNetworkPlugins/group/custom-flannel/Start 54.33
309 TestNetworkPlugins/group/false/Start 80.08
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
312 TestNetworkPlugins/group/calico/ControllerPod 6.01
313 TestNetworkPlugins/group/custom-flannel/DNS 0.13
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
316 TestNetworkPlugins/group/calico/KubeletFlags 0.29
317 TestNetworkPlugins/group/calico/NetCatPod 9.22
318 TestNetworkPlugins/group/calico/DNS 0.15
319 TestNetworkPlugins/group/calico/Localhost 0.12
320 TestNetworkPlugins/group/calico/HairPin 0.12
321 TestNetworkPlugins/group/enable-default-cni/Start 79
322 TestNetworkPlugins/group/false/KubeletFlags 0.34
323 TestNetworkPlugins/group/false/NetCatPod 12.22
324 TestNetworkPlugins/group/flannel/Start 55.91
325 TestNetworkPlugins/group/false/DNS 0.14
326 TestNetworkPlugins/group/false/Localhost 0.14
327 TestNetworkPlugins/group/false/HairPin 0.13
328 TestNetworkPlugins/group/bridge/Start 80.62
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
331 TestNetworkPlugins/group/flannel/NetCatPod 9.18
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
334 TestNetworkPlugins/group/flannel/DNS 0.13
335 TestNetworkPlugins/group/flannel/Localhost 0.12
336 TestNetworkPlugins/group/flannel/HairPin 0.12
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
340 TestNetworkPlugins/group/kubenet/Start 42.69
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
344 TestNetworkPlugins/group/bridge/NetCatPod 8.25
345 TestNetworkPlugins/group/bridge/DNS 0.14
346 TestNetworkPlugins/group/bridge/Localhost 0.12
347 TestNetworkPlugins/group/bridge/HairPin 0.13
349 TestStartStop/group/no-preload/serial/FirstStart 113.62
350 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
351 TestNetworkPlugins/group/kubenet/NetCatPod 10.19
352 TestNetworkPlugins/group/kubenet/DNS 0.16
353 TestNetworkPlugins/group/kubenet/Localhost 0.11
354 TestNetworkPlugins/group/kubenet/HairPin 0.11
356 TestStartStop/group/embed-certs/serial/FirstStart 74.83
357 TestStartStop/group/embed-certs/serial/DeployApp 9.24
358 TestStartStop/group/no-preload/serial/DeployApp 10.25
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
360 TestStartStop/group/embed-certs/serial/Stop 10.85
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
362 TestStartStop/group/no-preload/serial/Stop 10.76
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
364 TestStartStop/group/embed-certs/serial/SecondStart 587.41
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/no-preload/serial/SecondStart 332.99
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.27
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.65
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 558.15
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.01
377 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
379 TestStartStop/group/no-preload/serial/Pause 2.48
381 TestStartStop/group/newest-cni/serial/FirstStart 37.69
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
384 TestStartStop/group/newest-cni/serial/Stop 9.68
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
386 TestStartStop/group/newest-cni/serial/SecondStart 27.23
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/newest-cni/serial/Pause 2.51
391 TestStartStop/group/old-k8s-version/serial/Stop 1.2
392 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
397 TestStartStop/group/embed-certs/serial/Pause 2.44
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.4
TestDownloadOnly/v1.16.0/json-events (33.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-314878 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-314878 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (33.500472312s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (33.50s)
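
For context on what this subtest exercises: minikube start -o=json writes one JSON event per line (CloudEvents-style records carrying a type string and a data map), and the test consumes that stream to assert on download progress. Below is a minimal, stdlib-only sketch of such a consumer; the struct shape is an assumption read off the observed output format, not minikube's actual test code.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents-style lines that "minikube start -o=json"
// prints; only the fields used here are declared (assumed shape).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Typical use: out/minikube-linux-amd64 start -o=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}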

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-314878
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-314878: exit status 85 (70.294491ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-314878 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |          |
	|         | -p download-only-314878        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:41:28
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:41:28.627839   13631 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:41:28.627937   13631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:41:28.627943   13631 out.go:304] Setting ErrFile to fd 2...
	I0216 16:41:28.627950   13631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:41:28.628137   13631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	W0216 16:41:28.628306   13631 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17936-6821/.minikube/config/config.json: open /home/jenkins/minikube-integration/17936-6821/.minikube/config/config.json: no such file or directory
	I0216 16:41:28.628927   13631 out.go:298] Setting JSON to true
	I0216 16:41:28.629777   13631 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1435,"bootTime":1708100254,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 16:41:28.629830   13631 start.go:139] virtualization: kvm guest
	I0216 16:41:28.632332   13631 out.go:97] [download-only-314878] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 16:41:28.633984   13631 out.go:169] MINIKUBE_LOCATION=17936
	W0216 16:41:28.632431   13631 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball: no such file or directory
	I0216 16:41:28.632475   13631 notify.go:220] Checking for updates...
	I0216 16:41:28.636655   13631 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:41:28.637938   13631 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 16:41:28.639354   13631 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 16:41:28.640663   13631 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0216 16:41:28.643332   13631 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 16:41:28.643533   13631 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:41:28.665409   13631 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:41:28.665520   13631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:41:29.009131   13631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-16 16:41:29.000240436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:41:29.009278   13631 docker.go:295] overlay module found
	I0216 16:41:29.010874   13631 out.go:97] Using the docker driver based on user configuration
	I0216 16:41:29.010897   13631 start.go:299] selected driver: docker
	I0216 16:41:29.010902   13631 start.go:903] validating driver "docker" against <nil>
	I0216 16:41:29.010967   13631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:41:29.060297   13631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-16 16:41:29.052366936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:41:29.060446   13631 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:41:29.060948   13631 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0216 16:41:29.061129   13631 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 16:41:29.062866   13631 out.go:169] Using Docker driver with root privileges
	I0216 16:41:29.064103   13631 cni.go:84] Creating CNI manager for ""
	I0216 16:41:29.064129   13631 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 16:41:29.064141   13631 start_flags.go:323] config:
	{Name:download-only-314878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-314878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:41:29.065594   13631 out.go:97] Starting control plane node download-only-314878 in cluster download-only-314878
	I0216 16:41:29.065615   13631 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:41:29.066943   13631 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:41:29.066963   13631 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 16:41:29.067057   13631 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:41:29.082019   13631 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:41:29.082170   13631 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:41:29.082250   13631 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:41:29.165986   13631 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 16:41:29.166016   13631 cache.go:56] Caching tarball of preloaded images
	I0216 16:41:29.166141   13631 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 16:41:29.167858   13631 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0216 16:41:29.167871   13631 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:41:29.274133   13631 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 16:41:39.149683   13631 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:41:39.149786   13631 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:41:39.917243   13631 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 16:41:39.917670   13631 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/download-only-314878/config.json ...
	I0216 16:41:39.917712   13631 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/download-only-314878/config.json: {Name:mk2477f55683e5d7f5e84bd7daa73396eafb8bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:41:39.917886   13631 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 16:41:39.918082   13631 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0216 16:41:43.352040   13631 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-314878"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
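
The download.go lines in the log above fetch artifacts with an explicit checksum reference (?checksum=md5:... for the preload tarball, checksum=file:...kubectl.sha1 for kubectl), so each payload is verified before it is trusted. The following is a stdlib-only sketch of that pattern, assuming the md5 digest is known separately; the example URL and digest are copied from the log, and the real logic lives in minikube's download package.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchVerified downloads url to dst and fails unless the payload's md5
// matches wantMD5 (lowercase hex), mirroring the checksum-tagged downloads
// in the log above. The function name is this sketch's own.
func fetchVerified(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// TeeReader hashes the bytes as they stream to disk.
	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		os.Remove(dst) // do not leave a corrupt artifact in the cache
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest taken from the download.go line in the log above.
	err := fetchVerified(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"preload.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}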

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-314878
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (14.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-966478 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-966478 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.506528663s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.51s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-966478
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-966478: exit status 85 (71.610588ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-314878 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |                     |
	|         | -p download-only-314878        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| delete  | -p download-only-314878        | download-only-314878 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| start   | -o=json --download-only        | download-only-966478 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | -p download-only-966478        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:42:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:42:02.557062   13973 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:42:02.557300   13973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:02.557308   13973 out.go:304] Setting ErrFile to fd 2...
	I0216 16:42:02.557312   13973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:02.557504   13973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 16:42:02.558089   13973 out.go:298] Setting JSON to true
	I0216 16:42:02.558927   13973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1469,"bootTime":1708100254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 16:42:02.558995   13973 start.go:139] virtualization: kvm guest
	I0216 16:42:02.561302   13973 out.go:97] [download-only-966478] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 16:42:02.563050   13973 out.go:169] MINIKUBE_LOCATION=17936
	I0216 16:42:02.561473   13973 notify.go:220] Checking for updates...
	I0216 16:42:02.566293   13973 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:42:02.567939   13973 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 16:42:02.569465   13973 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 16:42:02.571009   13973 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0216 16:42:02.573801   13973 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 16:42:02.574005   13973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:42:02.595834   13973 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:42:02.595922   13973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:02.653713   13973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-16 16:42:02.645376635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:42:02.653803   13973 docker.go:295] overlay module found
	I0216 16:42:02.655702   13973 out.go:97] Using the docker driver based on user configuration
	I0216 16:42:02.655733   13973 start.go:299] selected driver: docker
	I0216 16:42:02.655740   13973 start.go:903] validating driver "docker" against <nil>
	I0216 16:42:02.655843   13973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:02.705538   13973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-16 16:42:02.69709071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:42:02.705731   13973 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:42:02.706373   13973 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0216 16:42:02.706570   13973 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 16:42:02.708690   13973 out.go:169] Using Docker driver with root privileges
	I0216 16:42:02.710293   13973 cni.go:84] Creating CNI manager for ""
	I0216 16:42:02.710327   13973 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:42:02.710340   13973 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 16:42:02.710355   13973 start_flags.go:323] config:
	{Name:download-only-966478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-966478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:42:02.712122   13973 out.go:97] Starting control plane node download-only-966478 in cluster download-only-966478
	I0216 16:42:02.712147   13973 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:42:02.714047   13973 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:42:02.714077   13973 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:42:02.714203   13973 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:42:02.729700   13973 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:42:02.729819   13973 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:42:02.729837   13973 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 16:42:02.729841   13973 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 16:42:02.729847   13973 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 16:42:02.811719   13973 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0216 16:42:02.811753   13973 cache.go:56] Caching tarball of preloaded images
	I0216 16:42:02.811925   13973 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:42:02.814012   13973 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0216 16:42:02.814042   13973 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:42:02.923041   13973 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-966478"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
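
One substantive difference between this Last Start log and the v1.16.0 one above: cni.go recommends no CNI for v1.16.0, while for v1.28.4 it reports the docker driver plus docker runtime on kubernetes v1.24+ and recommends bridge, setting NetworkPlugin=cni (dockershim was removed in v1.24, so the docker runtime needs an explicit CNI). Below is a simplified sketch of that version gate; the v1.24 cutoff is read from the log message, and the parsing is an illustration rather than minikube's cni package.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// recommendCNI reproduces the decision visible in the logs: on the docker
// driver with the docker runtime, Kubernetes v1.24+ gets the bridge CNI,
// older versions get none. Simplified assumption, not minikube's code.
func recommendCNI(k8sVersion string) string {
	parts := strings.Split(strings.TrimPrefix(k8sVersion, "v"), ".")
	if len(parts) < 2 {
		return ""
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return ""
	}
	if major > 1 || (major == 1 && minor >= 24) {
		return "bridge"
	}
	return "" // CNI unnecessary in this configuration
}

func main() {
	for _, v := range []string{"v1.16.0", "v1.28.4", "v1.29.0-rc.2"} {
		fmt.Printf("%s -> %q\n", v, recommendCNI(v))
	}
}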

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-966478
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)
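
Stepping back to the kicbase handling in the v1.28.4 start log above (image.go/cache.go): the base image is looked up in the local docker daemon first, then in the on-disk cache, and only downloaded when both miss — which is why that run logs "exists in cache, skipping pull". A sketch of that flow follows; the cache path and the download helper are illustrative stand-ins, not minikube's real layout or API.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBaseImage mirrors the flow in the log: local docker daemon first,
// then the on-disk cache, and a download only when both miss.
func ensureBaseImage(ref, cachePath string) error {
	if inDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
		return nil
	}
	if _, err := os.Stat(cachePath); err == nil {
		fmt.Println("exists in cache, skipping pull")
		return nil
	}
	fmt.Println("downloading to local cache:", cachePath)
	return download(ref, cachePath)
}

// inDaemon shells out to docker; "image inspect" exits non-zero when the
// image is absent from the local daemon.
func inDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

// download is a stand-in; a real implementation would stream the image into
// the cache with verification (see the checksum sketch earlier in this report).
func download(ref, cachePath string) error {
	return fmt.Errorf("download of %s not implemented in this sketch", ref)
}

func main() {
	if err := ensureBaseImage(
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936",
		os.ExpandEnv("$HOME/.minikube/cache/kic/amd64/kicbase.tar"), // assumed layout
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}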

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (38.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-591766 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-591766 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (38.494055327s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (38.49s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)
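
The preload-exists subtests are effectively free (0.00s) because they only verify that the tarball cached by the preceding json-events run is on disk. A sketch of that existence check follows; the path pattern (v18 preload, docker/overlay2/amd64 naming) is read off the download lines earlier in this report, and the MINIKUBE_HOME handling is simplified.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location seen in the logs:
//   $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<ver>-docker-overlay2-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. ~/.minikube in this job's layout
	for _, v := range []string{"v1.16.0", "v1.28.4", "v1.29.0-rc.2"} {
		p := preloadPath(home, v)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing: %s (%v)\n", p, err)
			continue
		}
		fmt.Printf("preload exists: %s\n", p)
	}
}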

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-591766
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-591766: exit status 85 (75.300011ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-314878 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |                     |
	|         | -p download-only-314878           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| delete  | -p download-only-314878           | download-only-314878 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| start   | -o=json --download-only           | download-only-966478 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | -p download-only-966478           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| delete  | -p download-only-966478           | download-only-966478 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| start   | -o=json --download-only           | download-only-591766 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | -p download-only-591766           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:42:17
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:42:17.476626   14280 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:42:17.476738   14280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:17.476743   14280 out.go:304] Setting ErrFile to fd 2...
	I0216 16:42:17.476748   14280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:17.476919   14280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 16:42:17.477503   14280 out.go:298] Setting JSON to true
	I0216 16:42:17.478335   14280 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1484,"bootTime":1708100254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 16:42:17.478396   14280 start.go:139] virtualization: kvm guest
	I0216 16:42:17.480374   14280 out.go:97] [download-only-591766] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 16:42:17.481887   14280 out.go:169] MINIKUBE_LOCATION=17936
	I0216 16:42:17.480552   14280 notify.go:220] Checking for updates...
	I0216 16:42:17.484528   14280 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:42:17.485885   14280 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 16:42:17.487179   14280 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 16:42:17.488493   14280 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0216 16:42:17.490952   14280 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 16:42:17.491196   14280 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:42:17.511086   14280 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:42:17.511187   14280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:17.559821   14280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-16 16:42:17.55065876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:42:17.560003   14280 docker.go:295] overlay module found
	I0216 16:42:17.561911   14280 out.go:97] Using the docker driver based on user configuration
	I0216 16:42:17.561943   14280 start.go:299] selected driver: docker
	I0216 16:42:17.561949   14280 start.go:903] validating driver "docker" against <nil>
	I0216 16:42:17.562033   14280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:17.615059   14280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-16 16:42:17.603906399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:42:17.615218   14280 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:42:17.615904   14280 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0216 16:42:17.616106   14280 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 16:42:17.617768   14280 out.go:169] Using Docker driver with root privileges
	I0216 16:42:17.618999   14280 cni.go:84] Creating CNI manager for ""
	I0216 16:42:17.619032   14280 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:42:17.619044   14280 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 16:42:17.619054   14280 start_flags.go:323] config:
	{Name:download-only-591766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-591766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:42:17.620463   14280 out.go:97] Starting control plane node download-only-591766 in cluster download-only-591766
	I0216 16:42:17.620479   14280 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:42:17.621867   14280 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:42:17.621888   14280 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 16:42:17.621938   14280 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:42:17.637613   14280 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:42:17.637722   14280 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:42:17.637741   14280 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 16:42:17.637745   14280 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 16:42:17.637751   14280 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 16:42:17.726183   14280 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0216 16:42:17.726213   14280 cache.go:56] Caching tarball of preloaded images
	I0216 16:42:17.726341   14280 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 16:42:17.728180   14280 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0216 16:42:17.728213   14280 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:42:17.835726   14280 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0216 16:42:27.342075   14280 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:42:27.342170   14280 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0216 16:42:28.095679   14280 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0216 16:42:28.095994   14280 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/download-only-591766/config.json ...
	I0216 16:42:28.096024   14280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/download-only-591766/config.json: {Name:mk3140916f4e57da942f41d690cf6f9b3306d85c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:28.096210   14280 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 16:42:28.096338   14280 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-591766"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
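
The preload download above appends its expected hash to the URL (?checksum=md5:47acda482c3add5b56147c92b8d7f468), and the following preload.go lines save and verify that checksum before the tarball is trusted. As a minimal Go sketch of that kind of post-download verification (illustrative only, not minikube's actual preload.go code; the local file name here is a hypothetical stand-in for the cache path in the log):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it against the expected
    // hex digest, mirroring the "verifying checksum" step in the log above.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Hypothetical local path; the digest is the one from the URL above.
        if err := verifyMD5("preloaded-images.tar.lz4", "47acda482c3add5b56147c92b8d7f468"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("checksum OK")
    }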

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-591766
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.23s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-507061 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-507061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-507061
--- PASS: TestDownloadOnlyKic (1.23s)

TestBinaryMirror (0.73s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-209162 --alsologtostderr --binary-mirror http://127.0.0.1:38615 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-209162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-209162
--- PASS: TestBinaryMirror (0.73s)

TestOffline (89.99s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-867549 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-867549 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m27.755312643s)
helpers_test.go:175: Cleaning up "offline-docker-867549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-867549
E0216 17:20:12.009594   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-867549: (2.237421638s)
--- PASS: TestOffline (89.99s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-500129
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-500129: exit status 85 (76.715275ms)

-- stdout --
	* Profile "addons-500129" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-500129"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-500129
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-500129: exit status 85 (77.137258ms)

-- stdout --
	* Profile "addons-500129" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-500129"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (133.22s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-500129 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-500129 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m13.220002563s)
--- PASS: TestAddons/Setup (133.22s)

TestAddons/parallel/Registry (15.65s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.90705ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vnkfw" [610c396a-5a1c-4a41-8a22-a7d6a4823fa0] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004692657s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sjtdp" [e787ba01-af9a-4973-8633-58efa26491fd] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005329049s
addons_test.go:340: (dbg) Run:  kubectl --context addons-500129 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-500129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-500129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.503151931s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 ip
2024/02/16 16:45:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.65s)
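
The health probe in this test is the in-cluster wget --spider against http://registry.kube-system.svc.cluster.local: any successful response means the registry Service is answering. A rough Go equivalent of that probe, for illustration only (the hostname resolves only from inside the cluster, which is why the test runs it in a busybox pod):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        // Like wget --spider, fetch headers only and discard the body.
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            log.Fatalf("registry unhealthy: %s", resp.Status)
        }
        fmt.Println("registry reachable:", resp.Status)
    }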

TestAddons/parallel/Ingress (19.14s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-500129 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-500129 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-500129 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [14b32274-f894-483e-b197-8e504160b5e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [14b32274-f894-483e-b197-8e504160b5e8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003327817s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-500129 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-500129 addons disable ingress-dns --alsologtostderr -v=1: (1.242791723s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-500129 addons disable ingress --alsologtostderr -v=1: (7.602817177s)
--- PASS: TestAddons/parallel/Ingress (19.14s)
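
The key assertion above is the curl with an explicit Host header: the request targets 127.0.0.1 from inside the node, but ingress-nginx routes by hostname, so Host: nginx.example.com must select the nginx backend. In Go the same trick is setting req.Host, which overrides the header derived from the URL (a minimal sketch, not the test's own code):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Host = "nginx.example.com" // matches the ingress rule for this hostname
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // expect 200 once the ingress is serving
    }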

TestAddons/parallel/InspektorGadget (10.61s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-22v4m" [491f0699-720a-49ac-ba3f-4b74ccdcd224] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003521182s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-500129
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-500129: (5.60561552s)
--- PASS: TestAddons/parallel/InspektorGadget (10.61s)

TestAddons/parallel/MetricsServer (5.62s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.386402ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-r5j2v" [25e33497-7915-4be7-ba86-ce3175ad0d60] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004589135s
addons_test.go:415: (dbg) Run:  kubectl --context addons-500129 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/HelmTiller (10.47s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.751595ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-zjvtg" [dc67bfe8-e533-4f48-a959-4ffe22eaa085] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004415688s
addons_test.go:473: (dbg) Run:  kubectl --context addons-500129 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-500129 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.786806982s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.47s)

TestAddons/parallel/CSI (61.66s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.955597ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-500129 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-500129 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [64e6cfbd-7bae-4929-97a2-06b3f855139f] Pending
helpers_test.go:344: "task-pv-pod" [64e6cfbd-7bae-4929-97a2-06b3f855139f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [64e6cfbd-7bae-4929-97a2-06b3f855139f] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003794351s
addons_test.go:584: (dbg) Run:  kubectl --context addons-500129 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-500129 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-500129 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-500129 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-500129 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-500129 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-500129 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d2c00004-6b06-4ca0-979f-7127c2bd52c0] Pending
helpers_test.go:344: "task-pv-pod-restore" [d2c00004-6b06-4ca0-979f-7127c2bd52c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d2c00004-6b06-4ca0-979f-7127c2bd52c0] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00378827s
addons_test.go:626: (dbg) Run:  kubectl --context addons-500129 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-500129 delete pod task-pv-pod-restore: (1.091908782s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-500129 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-500129 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-500129 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.515401573s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.66s)
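
The long run of identical kubectl get pvc ... -o jsonpath={.status.phase} lines above is a poll loop: the helper re-queries the claim's phase until it reports Bound or the 6m0s budget runs out. A compact Go sketch of the same loop, for illustration (context and claim names are copied from the log; the 2s poll interval is a hypothetical choice, not the helper's actual backoff):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-500129",
                "get", "pvc", "hpvc", "-n", "default",
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                fmt.Println("pvc is Bound")
                return
            }
            time.Sleep(2 * time.Second) // back off between polls
        }
        log.Fatal("timed out waiting for pvc to reach Bound")
    }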

TestAddons/parallel/Headlamp (16.3s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-500129 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-500129 --alsologtostderr -v=1: (1.292949404s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-r255f" [c9ce96aa-e4cf-4193-b4ed-7514690f9656] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-r255f" [c9ce96aa-e4cf-4193-b4ed-7514690f9656] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003485516s
--- PASS: TestAddons/parallel/Headlamp (16.30s)

TestAddons/parallel/CloudSpanner (5.73s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7b4754d5d4-k2j8r" [8340b2c4-b6f4-40c7-b301-7139be93d61c] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003099533s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-500129
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/parallel/LocalPath (54.44s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-500129 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-500129 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [46d29ddf-3718-431c-955a-6ccaf4f179b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [46d29ddf-3718-431c-955a-6ccaf4f179b5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [46d29ddf-3718-431c-955a-6ccaf4f179b5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003449902s
addons_test.go:891: (dbg) Run:  kubectl --context addons-500129 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 ssh "cat /opt/local-path-provisioner/pvc-d920ebd5-7182-4a2b-8274-286401b6a0bc_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-500129 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-500129 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-500129 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-500129 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.532628024s)
--- PASS: TestAddons/parallel/LocalPath (54.44s)

TestAddons/parallel/NvidiaDevicePlugin (5.58s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tc2vd" [b18fdd7e-0013-42ed-9208-995052476d7b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005415987s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-500129
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/parallel/Yakd (6s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-5zwwv" [8fe907a5-bfc5-44df-9cf0-4ea4fd3c9f77] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003153891s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-500129 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-500129 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (11.03s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-500129
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-500129: (10.756711245s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-500129
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-500129
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-500129
--- PASS: TestAddons/StoppedEnableDisable (11.03s)

TestCertOptions (29.73s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-651478 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-651478 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.931021217s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-651478 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-651478 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-651478 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-651478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-651478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-651478: (2.177770201s)
--- PASS: TestCertOptions (29.73s)

TestCertExpiration (238.26s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-075018 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-075018 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.768046548s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-075018 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-075018 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.458495262s)
helpers_test.go:175: Cleaning up "cert-expiration-075018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-075018
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-075018: (2.036604283s)
--- PASS: TestCertExpiration (238.26s)
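
A small aside on the flags above: --cert-expiration takes a Go duration, so 3m expires the certificates almost immediately (the point of the first start), while 8760h is exactly 365 days. Checking the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d, err := time.ParseDuration("8760h")
        if err != nil {
            panic(err)
        }
        fmt.Println(d.Hours() / 24) // 365 days, i.e. one year of cert validity
    }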

TestDockerFlags (28.7s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-336608 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-336608 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.60726339s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-336608 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-336608 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-336608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-336608
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-336608: (2.354331451s)
--- PASS: TestDockerFlags (28.70s)

TestForceSystemdFlag (34.01s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-177389 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-177389 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.450582668s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-177389 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-177389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-177389
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-177389: (2.22894515s)
--- PASS: TestForceSystemdFlag (34.01s)
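
The assertion here reduces to one command: docker info --format {{.CgroupDriver}} must print systemd once --force-systemd is in effect. A minimal Go sketch of that check, for illustration only (it queries the local docker daemon, whereas the test runs the same command inside the minikube node over ssh):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        driver := strings.TrimSpace(string(out))
        if driver != "systemd" {
            log.Fatalf("expected systemd cgroup driver, got %q", driver)
        }
        fmt.Println("cgroup driver:", driver)
    }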

TestForceSystemdEnv (33.5s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-973090 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-973090 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.959195295s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-973090 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-973090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-973090
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-973090: (2.15465903s)
--- PASS: TestForceSystemdEnv (33.50s)

TestKVMDriverInstallOrUpdate (4.98s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.98s)

TestErrorSpam/setup (22.34s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-787549 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-787549 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-787549 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-787549 --driver=docker  --container-runtime=docker: (22.343924611s)
--- PASS: TestErrorSpam/setup (22.34s)

TestErrorSpam/start (0.63s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.91s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (1.17s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 pause
--- PASS: TestErrorSpam/pause (1.17s)

TestErrorSpam/unpause (1.24s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 unpause
--- PASS: TestErrorSpam/unpause (1.24s)

TestErrorSpam/stop (1.89s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 stop: (1.682687563s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787549 --log_dir /tmp/nospam-787549 stop
--- PASS: TestErrorSpam/stop (1.89s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/test/nested/copy/13619/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.53s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361824 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-361824 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (42.52486933s)
--- PASS: TestFunctional/serial/StartWithProxy (42.53s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.06s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361824 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-361824 --alsologtostderr -v=8: (39.062605671s)
functional_test.go:659: soft start took 39.063265887s for "functional-361824" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.06s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-361824 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-361824 /tmp/TestFunctionalserialCacheCmdcacheadd_local2239920623/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cache add minikube-local-cache-test:functional-361824
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 cache add minikube-local-cache-test:functional-361824: (1.108103805s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cache delete minikube-local-cache-test:functional-361824
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-361824
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.240039ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.32s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 kubectl -- --context functional-361824 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-361824 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.97s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361824 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-361824 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.966559585s)
functional_test.go:757: restart took 38.966694641s for "functional-361824" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.97s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-361824 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.02s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 logs: (1.017247288s)
--- PASS: TestFunctional/serial/LogsCmd (1.02s)

TestFunctional/serial/LogsFileCmd (1.06s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 logs --file /tmp/TestFunctionalserialLogsFileCmd1018103649/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 logs --file /tmp/TestFunctionalserialLogsFileCmd1018103649/001/logs.txt: (1.055456941s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.06s)

TestFunctional/serial/InvalidService (4.11s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-361824 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-361824
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-361824: exit status 115 (342.83655ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30731 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-361824 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

TestFunctional/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 config get cpus: exit status 14 (79.382566ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 config get cpus: exit status 14 (67.217976ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

TestFunctional/parallel/DashboardCmd (10.05s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-361824 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-361824 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 57239: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.05s)

TestFunctional/parallel/DryRun (0.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361824 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-361824 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (253.820518ms)
-- stdout --
	* [functional-361824] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0216 16:49:43.836504   56681 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:49:43.836642   56681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:49:43.836653   56681 out.go:304] Setting ErrFile to fd 2...
	I0216 16:49:43.836670   56681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:49:43.836972   56681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 16:49:43.837703   56681 out.go:298] Setting JSON to false
	I0216 16:49:43.839127   56681 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1930,"bootTime":1708100254,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 16:49:43.839218   56681 start.go:139] virtualization: kvm guest
	I0216 16:49:43.866139   56681 out.go:177] * [functional-361824] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0216 16:49:43.868077   56681 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:49:43.868235   56681 notify.go:220] Checking for updates...
	I0216 16:49:43.870071   56681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:49:43.919852   56681 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 16:49:43.921500   56681 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 16:49:43.923135   56681 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 16:49:43.924765   56681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:49:43.926830   56681 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:49:43.927502   56681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:49:43.949666   56681 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:49:43.949808   56681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:49:44.016109   56681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-16 16:49:44.002400963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:49:44.016224   56681 docker.go:295] overlay module found
	I0216 16:49:44.019004   56681 out.go:177] * Using the docker driver based on existing profile
	I0216 16:49:44.020397   56681 start.go:299] selected driver: docker
	I0216 16:49:44.020420   56681 start.go:903] validating driver "docker" against &{Name:functional-361824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-361824 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:49:44.020529   56681 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:49:44.022808   56681 out.go:177] 
	W0216 16:49:44.024403   56681 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0216 16:49:44.025760   56681 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361824 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361824 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-361824 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (183.127566ms)
-- stdout --
	* [functional-361824] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0216 16:49:44.317331   56907 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:49:44.317484   56907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:49:44.317494   56907 out.go:304] Setting ErrFile to fd 2...
	I0216 16:49:44.317499   56907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:49:44.317798   56907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 16:49:44.318382   56907 out.go:298] Setting JSON to false
	I0216 16:49:44.319474   56907 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1931,"bootTime":1708100254,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0216 16:49:44.319536   56907 start.go:139] virtualization: kvm guest
	I0216 16:49:44.321912   56907 out.go:177] * [functional-361824] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0216 16:49:44.323632   56907 notify.go:220] Checking for updates...
	I0216 16:49:44.323637   56907 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:49:44.325393   56907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:49:44.326910   56907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	I0216 16:49:44.328470   56907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	I0216 16:49:44.330029   56907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0216 16:49:44.331508   56907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:49:44.333884   56907 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:49:44.334561   56907 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:49:44.357519   56907 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:49:44.357638   56907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:49:44.421778   56907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-16 16:49:44.408239637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 16:49:44.421908   56907 docker.go:295] overlay module found
	I0216 16:49:44.425193   56907 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0216 16:49:44.428538   56907 start.go:299] selected driver: docker
	I0216 16:49:44.428584   56907 start.go:903] validating driver "docker" against &{Name:functional-361824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-361824 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:49:44.428731   56907 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:49:44.431604   56907 out.go:177] 
	W0216 16:49:44.433512   56907 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0216 16:49:44.435710   56907 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.14s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

TestFunctional/parallel/ServiceCmdConnect (10.58s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-361824 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-361824 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-vqxld" [99cda428-eeb2-4135-b879-0758d226f519] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-vqxld" [99cda428-eeb2-4135-b879-0758d226f519] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004784987s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31788
functional_test.go:1671: http://192.168.49.2:31788: success! body:

Hostname: hello-node-connect-55497b8b78-vqxld

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31788
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)

TestFunctional/parallel/AddonsCmd (0.24s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (34.86s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1a8fa65f-ac90-4e60-a7b6-cf514961e44d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004891512s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-361824 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-361824 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-361824 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-361824 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2dc37322-5d89-4ebb-86f1-3cbb01100890] Pending
helpers_test.go:344: "sp-pod" [2dc37322-5d89-4ebb-86f1-3cbb01100890] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2dc37322-5d89-4ebb-86f1-3cbb01100890] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.0622773s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-361824 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-361824 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-361824 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e1ecb4ab-ec11-41e0-99cc-09d1a1823b0a] Pending
helpers_test.go:344: "sp-pod" [e1ecb4ab-ec11-41e0-99cc-09d1a1823b0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e1ecb4ab-ec11-41e0-99cc-09d1a1823b0a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003273097s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-361824 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.86s)

TestFunctional/parallel/SSHCmd (0.57s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.95s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh -n functional-361824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cp functional-361824:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1456492031/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh -n functional-361824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh -n functional-361824 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)

TestFunctional/parallel/MySQL (28.71s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-361824 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-wrbzb" [e2e8caa3-383e-452e-a541-679d4f29c9c0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2024/02/16 16:49:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-859648c796-wrbzb" [e2e8caa3-383e-452e-a541-679d4f29c9c0] Running
E0216 16:50:12.009356   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.015219   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.025492   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.045769   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.086237   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.167452   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.328533   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:12.649119   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:50:13.289645   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004336973s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;": exit status 1 (118.65103ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
E0216 16:50:14.570011   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;": exit status 1 (111.637995ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0216 16:50:17.130869   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;": exit status 1 (207.61313ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-361824 exec mysql-859648c796-wrbzb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.71s)

TestFunctional/parallel/FileSync (0.31s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13619/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /etc/test/nested/copy/13619/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.7s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13619.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /etc/ssl/certs/13619.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13619.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /usr/share/ca-certificates/13619.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/136192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /etc/ssl/certs/136192.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/136192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /usr/share/ca-certificates/136192.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
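
The six checks above read the same certificate from its synced locations and from its OpenSSL subject-hash alias (/etc/ssl/certs/51391683.0 for the 13619.pem cert). A minimal Go sketch of one such round, assuming the three copies must be byte-identical (the comparison is illustrative, not the test's code):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/13619.pem",
		"/usr/share/ca-certificates/13619.pem",
		"/etc/ssl/certs/51391683.0", // subject-hash name for the same cert
	}
	var first []byte
	for i, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-361824",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			fmt.Printf("%s: %v\n", p, err)
			return
		}
		if i == 0 {
			first = out
		} else if !bytes.Equal(first, out) {
			fmt.Printf("%s differs from %s\n", p, paths[0])
			return
		}
	}
	fmt.Println("all three copies match")
}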

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-361824 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
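
The --template argument above is ordinary Go text/template syntax: index into .items, then range the node's labels map and print each key. A standalone sketch with a hypothetical stand-in for the kubectl object, runnable without a cluster:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical stand-in for the `kubectl get nodes -o go-template` input.
	data := map[string]any{
		"items": []any{
			map[string]any{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/hostname": "functional-361824",
				"minikube.k8s.io/name":   "functional-361824",
			}}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}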

TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh "sudo systemctl is-active crio": exit status 1 (296.604736ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
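
The non-zero exit above is the point of the test: `systemctl is-active` prints the unit state and exits non-zero (3, surfaced through ssh) when the unit is not active, so "inactive" on stdout together with a failing exit code is the pass condition on a Docker-runtime cluster. A minimal Go sketch of that check (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() still returns the captured stdout when the command exits non-zero.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-361824",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("ok: crio is disabled, as expected with the docker runtime")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}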

TestFunctional/parallel/License (0.62s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-361824 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-361824 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6qr5q" [f8a7ad54-27c4-40e1-af9a-8336a12bbea6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6qr5q" [f8a7ad54-27c4-40e1-af9a-8336a12bbea6] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003940756s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
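
The wait above polls until the pod matching app=hello-node reports Running (the helper also inspects the Ready conditions shown in the status lines). A simplified Go sketch that watches only the pod phase via a kubectl jsonpath query (deadline and interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "functional-361824",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}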

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-361824 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-361824 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-361824 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-361824 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 53022: os: process already finished
helpers_test.go:502: unable to terminate pid 52774: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-361824 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.27s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-361824 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3452357d-c22d-4600-85ee-b8c2f0bed2a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3452357d-c22d-4600-85ee-b8c2f0bed2a6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.004014924s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.27s)

TestFunctional/parallel/ServiceCmd/List (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 service list -o json
functional_test.go:1490: Took "590.594918ms" to run "out/minikube-linux-amd64 -p functional-361824 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30803
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30803
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
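
Both endpoints above resolve to 192.168.49.2:30803, i.e. the node's internal IP plus the NodePort Kubernetes allocated when the deployment was exposed. A minimal Go sketch assembling the same URL from kubectl output (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ip, _ := exec.Command("kubectl", "--context", "functional-361824", "get", "nodes",
		"-o", "jsonpath={.items[0].status.addresses[?(@.type==\"InternalIP\")].address}").Output()
	port, _ := exec.Command("kubectl", "--context", "functional-361824", "get", "svc",
		"hello-node", "-o", "jsonpath={.spec.ports[0].nodePort}").Output()
	fmt.Printf("http://%s:%s\n",
		strings.TrimSpace(string(ip)), strings.TrimSpace(string(port)))
}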

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/MountCmd/any-port (8.3s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdany-port3903862672/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1708102179447126742" to /tmp/TestFunctionalparallelMountCmdany-port3903862672/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1708102179447126742" to /tmp/TestFunctionalparallelMountCmdany-port3903862672/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1708102179447126742" to /tmp/TestFunctionalparallelMountCmdany-port3903862672/001/test-1708102179447126742
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.632136ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 16 16:49 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 16 16:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 16 16:49 test-1708102179447126742
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh cat /mount-9p/test-1708102179447126742
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-361824 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6181c237-faff-4d0a-b6fe-1e3aafb35492] Pending
helpers_test.go:344: "busybox-mount" [6181c237-faff-4d0a-b6fe-1e3aafb35492] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6181c237-faff-4d0a-b6fe-1e3aafb35492] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6181c237-faff-4d0a-b6fe-1e3aafb35492] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0565293s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-361824 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdany-port3903862672/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "293.822614ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "57.534815ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "276.34425ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "60.962033ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-361824 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.40.252 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-361824 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.53s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361824 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-361824
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-361824
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361824 image ls --format short --alsologtostderr:
I0216 16:50:02.120505   60398 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:02.120741   60398 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.120791   60398 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:02.120805   60398 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.121172   60398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
I0216 16:50:02.122196   60398 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.122372   60398 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.122975   60398 cli_runner.go:164] Run: docker container inspect functional-361824 --format={{.State.Status}}
I0216 16:50:02.151502   60398 ssh_runner.go:195] Run: systemctl --version
I0216 16:50:02.151602   60398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-361824
I0216 16:50:02.174436   60398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/functional-361824/id_rsa Username:docker}
I0216 16:50:02.273717   60398 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361824 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-361824 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-361824 | a45268a549b2c | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361824 image ls --format table --alsologtostderr:
I0216 16:50:02.393837   60562 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:02.394155   60562 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.394170   60562 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:02.394177   60562 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.395006   60562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
I0216 16:50:02.397142   60562 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.397620   60562 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.399253   60562 cli_runner.go:164] Run: docker container inspect functional-361824 --format={{.State.Status}}
I0216 16:50:02.439385   60562 ssh_runner.go:195] Run: systemctl --version
I0216 16:50:02.439470   60562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-361824
I0216 16:50:02.462244   60562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/functional-361824/id_rsa Username:docker}
I0216 16:50:02.596914   60562 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361824 image ls --format json --alsologtostderr:
[{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"a45268a549b2c3078c9a9b61bd2f4bec4bd8965213e96f83ff809150bb387c8d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-361824"],"size":"30"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-361824"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361824 image ls --format json --alsologtostderr:
I0216 16:50:02.387862   60556 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:02.388116   60556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.388125   60556 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:02.388130   60556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.388364   60556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
I0216 16:50:02.389026   60556 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.389125   60556 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.389519   60556 cli_runner.go:164] Run: docker container inspect functional-361824 --format={{.State.Status}}
I0216 16:50:02.422046   60556 ssh_runner.go:195] Run: systemctl --version
I0216 16:50:02.422117   60556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-361824
I0216 16:50:02.452587   60556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/functional-361824/id_rsa Username:docker}
I0216 16:50:02.592871   60556 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
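
The stdout above is a single JSON array whose elements carry id, repoDigests, repoTags, and size; note that size is a decimal string, not a number. A minimal Go sketch decoding it (the struct is an assumption inferred from the output shown, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // decimal string, e.g. "42600000"
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-361824",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %s bytes\n", strings.Join(img.RepoTags, ","), img.Size)
	}
}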

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361824 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-361824
size: "32900000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a45268a549b2c3078c9a9b61bd2f4bec4bd8965213e96f83ff809150bb387c8d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-361824
size: "30"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361824 image ls --format yaml --alsologtostderr:
I0216 16:50:02.136071   60397 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:02.136215   60397 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.136225   60397 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:02.136233   60397 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.136563   60397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
I0216 16:50:02.137285   60397 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.137416   60397 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.137970   60397 cli_runner.go:164] Run: docker container inspect functional-361824 --format={{.State.Status}}
I0216 16:50:02.158171   60397 ssh_runner.go:195] Run: systemctl --version
I0216 16:50:02.158224   60397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-361824
I0216 16:50:02.176799   60397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/functional-361824/id_rsa Username:docker}
I0216 16:50:02.294277   60397 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh pgrep buildkitd: exit status 1 (360.912839ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image build -t localhost/my-image:functional-361824 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image build -t localhost/my-image:functional-361824 testdata/build --alsologtostderr: (4.02013103s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361824 image build -t localhost/my-image:functional-361824 testdata/build --alsologtostderr:
I0216 16:50:02.487893   60600 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:02.488196   60600 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.488208   60600 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:02.488216   60600 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:02.488451   60600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
I0216 16:50:02.489940   60600 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.490995   60600 config.go:182] Loaded profile config "functional-361824": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:02.491396   60600 cli_runner.go:164] Run: docker container inspect functional-361824 --format={{.State.Status}}
I0216 16:50:02.513485   60600 ssh_runner.go:195] Run: systemctl --version
I0216 16:50:02.513556   60600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-361824
I0216 16:50:02.533919   60600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/functional-361824/id_rsa Username:docker}
I0216 16:50:02.693673   60600 build_images.go:151] Building image from path: /tmp/build.579376145.tar
I0216 16:50:02.693763   60600 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0216 16:50:02.703947   60600 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.579376145.tar
I0216 16:50:02.707800   60600 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.579376145.tar: stat -c "%s %y" /var/lib/minikube/build/build.579376145.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.579376145.tar': No such file or directory
I0216 16:50:02.707831   60600 ssh_runner.go:362] scp /tmp/build.579376145.tar --> /var/lib/minikube/build/build.579376145.tar (3072 bytes)
I0216 16:50:02.734987   60600 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.579376145
I0216 16:50:02.744946   60600 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.579376145 -xf /var/lib/minikube/build/build.579376145.tar
I0216 16:50:02.799222   60600 docker.go:360] Building image: /var/lib/minikube/build/build.579376145
I0216 16:50:02.799294   60600 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-361824 /var/lib/minikube/build/build.579376145
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 1.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:94282214c3aa5594574a7586673e1a98d00a71c33f27ac2955e0185a265f72a5 done
#8 naming to localhost/my-image:functional-361824 done
#8 DONE 0.0s
I0216 16:50:06.392039   60600 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-361824 /var/lib/minikube/build/build.579376145: (3.592717778s)
I0216 16:50:06.392143   60600 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.579376145
I0216 16:50:06.404138   60600 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.579376145.tar
I0216 16:50:06.415527   60600 build_images.go:207] Built localhost/my-image:functional-361824 from /tmp/build.579376145.tar
I0216 16:50:06.415572   60600 build_images.go:123] succeeded building to: functional-361824
I0216 16:50:06.415578   60600 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)

TestFunctional/parallel/ImageCommands/Setup (2.01s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.986375426s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-361824
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.01s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image load --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image load --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr: (4.567452407s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.79s)

TestFunctional/parallel/DockerEnv/bash (1.01s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-361824 docker-env) && out/minikube-linux-amd64 status -p functional-361824"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-361824 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.01s)
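
The bash eval above applies the `export KEY="VALUE"` lines that docker-env prints (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY, and similar) so that a plain `docker images` talks to the daemon inside the minikube container. A minimal Go sketch doing the same without a shell (the quote-stripping parse is a deliberate simplification):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-361824",
		"docker-env").Output()
	if err != nil {
		panic(err)
	}
	env := os.Environ()
	for _, line := range strings.Split(string(out), "\n") {
		if kv, ok := strings.CutPrefix(line, "export "); ok {
			env = append(env, strings.ReplaceAll(kv, `"`, "")) // KEY="V" -> KEY=V
		}
	}
	docker := exec.Command("docker", "images")
	docker.Env = env
	docker.Stdout = os.Stdout
	docker.Stderr = os.Stderr
	if err := docker.Run(); err != nil {
		panic(err)
	}
}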

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image load --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image load --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr: (2.811246987s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.06s)

TestFunctional/parallel/MountCmd/specific-port (1.8s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdspecific-port645377964/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.732853ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdspecific-port645377964/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh "sudo umount -f /mount-9p": exit status 1 (319.906165ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-361824 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdspecific-port645377964/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.06s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496016665/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496016665/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496016665/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T" /mount1: exit status 1 (378.623954ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-361824 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496016665/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496016665/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361824 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496016665/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.06s)
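
Note that the first findmnt probe above fails and is simply re-run: the three mount daemons need a moment to come up. A minimal Go sketch of that probe-and-retry pattern, reusing the binary path, profile, and findmnt command from the log; the retry budget and sleep interval are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mounted reports whether findmnt can resolve the path inside the node.
func mounted(profile, path string) bool {
	return exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "findmnt -T "+path).Run() == nil
}

func main() {
	for attempt := 0; attempt < 5; attempt++ { // illustrative retry budget
		if mounted("functional-361824", "/mount1") {
			fmt.Println("/mount1 is mounted")
			return
		}
		time.Sleep(500 * time.Millisecond) // the first probe can race the mount daemon
	}
	fmt.Println("gave up waiting for /mount1")
}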

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.344909306s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-361824
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image load --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image load --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr: (3.13167007s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.71s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image save gcr.io/google-containers/addon-resizer:functional-361824 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image save gcr.io/google-containers/addon-resizer:functional-361824 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.266209639s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.27s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image rm gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.661239816s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-361824
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-361824 image save --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-361824 image save --daemon gcr.io/google-containers/addon-resizer:functional-361824 --alsologtostderr: (1.961175636s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-361824
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.01s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-361824
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-361824
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-361824
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (25s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-657816 --driver=docker  --container-runtime=docker
E0216 16:50:32.491778   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-657816 --driver=docker  --container-runtime=docker: (25.00128699s)
--- PASS: TestImageBuild/serial/Setup (25.00s)

TestImageBuild/serial/NormalBuild (2.43s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-657816
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-657816: (2.430082649s)
--- PASS: TestImageBuild/serial/NormalBuild (2.43s)

TestImageBuild/serial/BuildWithBuildArg (0.9s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-657816
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.90s)

TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-657816
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-657816
E0216 16:50:52.972556   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

TestJSONOutput/start/Command (41.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-197894 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-197894 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.075231822s)
--- PASS: TestJSONOutput/start/Command (41.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-197894 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-197894 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-197894 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-197894 --output=json --user=testUser: (10.859112179s)
--- PASS: TestJSONOutput/stop/Command (10.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-607567 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-607567 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.301783ms)

-- stdout --
	{"specversion":"1.0","id":"efa2ff4b-7270-4175-969b-ed555f0ecca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-607567] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"59337d5a-79fb-49a0-89fd-9f140c942875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17936"}}
	{"specversion":"1.0","id":"3843f358-a6dc-418d-8f92-f1904157b374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c181bf03-3ca8-4cc5-9ed8-46fcb8b48f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig"}}
	{"specversion":"1.0","id":"ffc2ff51-2070-47a7-ac7c-5c37f019157f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube"}}
	{"specversion":"1.0","id":"b16b07c9-dfda-4a4e-9edd-43a0275b4157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ab54519a-8bf1-4f72-90ff-adec7c8a414d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"871507f4-b81c-497f-9c4e-2afbf497c054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-607567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-607567
--- PASS: TestErrorJSONOutput (0.23s)
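
Each stdout line above is a CloudEvents-style JSON object (specversion, type, data, ...), which is what makes the failure machine-readable. A minimal Go sketch that scans such output and surfaces step and error events like the DRV_UNSUPPORTED_OS one shown; the struct is an illustration, not minikube's own type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the log; data values are all strings.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` output here
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}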

TestKicCustomNetwork/create_custom_network (25.83s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-589913 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-589913 --network=: (23.816038416s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-589913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-589913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-589913: (1.998629889s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.83s)

TestKicCustomNetwork/use_default_bridge_network (28.56s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-287100 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-287100 --network=bridge: (26.610399296s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-287100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-287100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-287100: (1.93687273s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.56s)

TestKicExistingNetwork (25.02s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-058992 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-058992 --network=existing-network: (22.981109436s)
helpers_test.go:175: Cleaning up "existing-network-058992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-058992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-058992: (1.904456768s)
--- PASS: TestKicExistingNetwork (25.02s)

TestKicCustomSubnet (27.06s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-019064 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-019064 --subnet=192.168.60.0/24: (24.992266222s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-019064 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-019064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-019064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-019064: (2.055211969s)
--- PASS: TestKicCustomSubnet (27.06s)
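
The subnet check above reduces to a single docker network inspect template. A minimal Go sketch of the same verification, with the network name, CIDR, and format string taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.60.0/24" // the --subnet passed to minikube start
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-019064",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("want %s, got %s, match=%v\n", want, got, got == want)
}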

TestKicStaticIP (27.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-914037 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-914037 --static-ip=192.168.200.200: (25.547452759s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-914037 ip
helpers_test.go:175: Cleaning up "static-ip-914037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-914037
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-914037: (2.028412203s)
--- PASS: TestKicStaticIP (27.73s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (56.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-662452 --driver=docker  --container-runtime=docker
E0216 17:04:26.471074   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-662452 --driver=docker  --container-runtime=docker: (25.958023079s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-665064 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-665064 --driver=docker  --container-runtime=docker: (25.137375643s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-662452
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-665064
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-665064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-665064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-665064: (2.056366056s)
helpers_test.go:175: Cleaning up "first-662452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-662452
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-662452: (2.111904765s)
--- PASS: TestMinikubeProfile (56.33s)

TestMountStart/serial/StartWithMountFirst (6.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-305467 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-305467 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.975114462s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.98s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-305467 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (9.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-318803 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0216 17:05:12.009358   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-318803 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.743138696s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.74s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-318803 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.46s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-305467 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-305467 --alsologtostderr -v=5: (1.458042034s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-318803 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-318803
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-318803: (1.180094307s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (8.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-318803
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-318803: (7.081058717s)
--- PASS: TestMountStart/serial/RestartStopped (8.08s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-318803 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (65.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-820591 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0216 17:06:35.058394   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-820591 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m5.01832995s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.56s)

TestMultiNode/serial/DeployApp2Nodes (41.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-820591 -- rollout status deployment/busybox: (3.02876048s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-4n9vx -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-w95hm -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-4n9vx -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-w95hm -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-4n9vx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-w95hm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (41.43s)
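
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a deliberate poll: the test re-queries until the second busybox replica gets an address on the new node. A minimal Go sketch of that loop, using plain kubectl in place of the minikube wrapper; the timeout and interval are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) == 2 {
				fmt.Println("both pod IPs assigned:", ips)
				return
			}
		}
		time.Sleep(5 * time.Second) // "may be temporary": back off and re-poll
	}
	fmt.Println("timed out waiting for 2 pod IPs")
}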

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-4n9vx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-4n9vx -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-w95hm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-820591 -- exec busybox-5b5d89c9d6-w95hm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (15.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-820591 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-820591 -v 3 --alsologtostderr: (15.013873961s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.74s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-820591 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (9.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp testdata/cp-test.txt multinode-820591:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile206980578/001/cp-test_multinode-820591.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591:/home/docker/cp-test.txt multinode-820591-m02:/home/docker/cp-test_multinode-820591_multinode-820591-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m02 "sudo cat /home/docker/cp-test_multinode-820591_multinode-820591-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591:/home/docker/cp-test.txt multinode-820591-m03:/home/docker/cp-test_multinode-820591_multinode-820591-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m03 "sudo cat /home/docker/cp-test_multinode-820591_multinode-820591-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp testdata/cp-test.txt multinode-820591-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile206980578/001/cp-test_multinode-820591-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591-m02:/home/docker/cp-test.txt multinode-820591:/home/docker/cp-test_multinode-820591-m02_multinode-820591.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591 "sudo cat /home/docker/cp-test_multinode-820591-m02_multinode-820591.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591-m02:/home/docker/cp-test.txt multinode-820591-m03:/home/docker/cp-test_multinode-820591-m02_multinode-820591-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m03 "sudo cat /home/docker/cp-test_multinode-820591-m02_multinode-820591-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp testdata/cp-test.txt multinode-820591-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile206980578/001/cp-test_multinode-820591-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591-m03:/home/docker/cp-test.txt multinode-820591:/home/docker/cp-test_multinode-820591-m03_multinode-820591.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591 "sudo cat /home/docker/cp-test_multinode-820591-m03_multinode-820591.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 cp multinode-820591-m03:/home/docker/cp-test.txt multinode-820591-m02:/home/docker/cp-test_multinode-820591-m03_multinode-820591-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 ssh -n multinode-820591-m02 "sudo cat /home/docker/cp-test_multinode-820591-m03_multinode-820591-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.64s)
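
Every copy above is verified by reading the file back over ssh. A minimal Go sketch of one such round trip, with the binary, profile, and paths taken from the log and only trivial error handling added:

package main

import (
	"fmt"
	"os/exec"
)

// minikube invokes the test binary the same way the log does.
func minikube(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := minikube("-p", "multinode-820591", "cp",
		"testdata/cp-test.txt", "multinode-820591:/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	// Read the file back from inside the node to confirm the copy landed.
	out, err := minikube("-p", "multinode-820591", "ssh", "-n", "multinode-820591",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Printf("verify err=%v contents=%q\n", err, out)
}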

TestMultiNode/serial/StopNode (2.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-820591 node stop m03: (1.190681984s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-820591 status: exit status 7 (476.883489ms)

-- stdout --
	multinode-820591
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-820591-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-820591-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr: exit status 7 (488.293996ms)

-- stdout --
	multinode-820591
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-820591-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-820591-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0216 17:07:48.414689  140250 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:07:48.414955  140250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:07:48.414964  140250 out.go:304] Setting ErrFile to fd 2...
	I0216 17:07:48.414969  140250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:07:48.415163  140250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:07:48.415382  140250 out.go:298] Setting JSON to false
	I0216 17:07:48.415426  140250 mustload.go:65] Loading cluster: multinode-820591
	I0216 17:07:48.415481  140250 notify.go:220] Checking for updates...
	I0216 17:07:48.416025  140250 config.go:182] Loaded profile config "multinode-820591": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:07:48.416043  140250 status.go:255] checking status of multinode-820591 ...
	I0216 17:07:48.416669  140250 cli_runner.go:164] Run: docker container inspect multinode-820591 --format={{.State.Status}}
	I0216 17:07:48.433769  140250 status.go:330] multinode-820591 host status = "Running" (err=<nil>)
	I0216 17:07:48.433800  140250 host.go:66] Checking if "multinode-820591" exists ...
	I0216 17:07:48.434060  140250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-820591
	I0216 17:07:48.450796  140250 host.go:66] Checking if "multinode-820591" exists ...
	I0216 17:07:48.451066  140250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:07:48.451114  140250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-820591
	I0216 17:07:48.468217  140250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/multinode-820591/id_rsa Username:docker}
	I0216 17:07:48.561499  140250 ssh_runner.go:195] Run: systemctl --version
	I0216 17:07:48.566490  140250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:07:48.577714  140250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:07:48.636726  140250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-16 17:07:48.626889874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0216 17:07:48.637255  140250 kubeconfig.go:92] found "multinode-820591" server: "https://192.168.58.2:8443"
	I0216 17:07:48.637276  140250 api_server.go:166] Checking apiserver status ...
	I0216 17:07:48.637307  140250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:07:48.648286  140250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2323/cgroup
	I0216 17:07:48.657748  140250 api_server.go:182] apiserver freezer: "7:freezer:/docker/8c81ada75a3ed3df1cdbbac1f5726fdd6b500a86ef64388c1c06ec3055715b32/kubepods/burstable/pod0fe75b35e448955e7c951cf76d203cd3/28c5e052e97601680896b1a069be444d7893d3c549781cc3a9408fd3a3e9c38a"
	I0216 17:07:48.657856  140250 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8c81ada75a3ed3df1cdbbac1f5726fdd6b500a86ef64388c1c06ec3055715b32/kubepods/burstable/pod0fe75b35e448955e7c951cf76d203cd3/28c5e052e97601680896b1a069be444d7893d3c549781cc3a9408fd3a3e9c38a/freezer.state
	I0216 17:07:48.665768  140250 api_server.go:204] freezer state: "THAWED"
	I0216 17:07:48.665806  140250 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0216 17:07:48.669854  140250 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0216 17:07:48.669878  140250 status.go:421] multinode-820591 apiserver status = Running (err=<nil>)
	I0216 17:07:48.669888  140250 status.go:257] multinode-820591 status: &{Name:multinode-820591 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0216 17:07:48.669903  140250 status.go:255] checking status of multinode-820591-m02 ...
	I0216 17:07:48.670164  140250 cli_runner.go:164] Run: docker container inspect multinode-820591-m02 --format={{.State.Status}}
	I0216 17:07:48.686628  140250 status.go:330] multinode-820591-m02 host status = "Running" (err=<nil>)
	I0216 17:07:48.686661  140250 host.go:66] Checking if "multinode-820591-m02" exists ...
	I0216 17:07:48.686968  140250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-820591-m02
	I0216 17:07:48.703871  140250 host.go:66] Checking if "multinode-820591-m02" exists ...
	I0216 17:07:48.704144  140250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:07:48.704228  140250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-820591-m02
	I0216 17:07:48.721013  140250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/multinode-820591-m02/id_rsa Username:docker}
	I0216 17:07:48.813190  140250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:07:48.823670  140250 status.go:257] multinode-820591-m02 status: &{Name:multinode-820591-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0216 17:07:48.823709  140250 status.go:255] checking status of multinode-820591-m03 ...
	I0216 17:07:48.823985  140250 cli_runner.go:164] Run: docker container inspect multinode-820591-m03 --format={{.State.Status}}
	I0216 17:07:48.841476  140250 status.go:330] multinode-820591-m03 host status = "Stopped" (err=<nil>)
	I0216 17:07:48.841498  140250 status.go:343] host is not running, skipping remaining checks
	I0216 17:07:48.841504  140250 status.go:257] multinode-820591-m03 status: &{Name:multinode-820591-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
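
The status probe in the stderr log above locates the kube-apiserver process, confirms its freezer cgroup is THAWED, and then polls /healthz. A minimal sketch for reproducing the same checks by hand, assuming the multinode-820591 profile from this run (the container ID, IP, and port will differ per run):

    # find the apiserver PID inside the node container
    minikube -p multinode-820591 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # the health endpoint the test polls; returns "ok" on HTTP 200
    curl -k https://192.168.58.2:8443/healthz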

TestMultiNode/serial/StartAfterStop (11.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-820591 node start m03 --alsologtostderr: (11.003683637s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.69s)
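
The StartAfterStop flow maps directly onto the node subcommands. A rough hand-run equivalent, assuming the same profile (a sketch, not the test harness itself):

    minikube -p multinode-820591 node start m03   # restart the previously stopped worker
    minikube -p multinode-820591 status           # expect Host/Kubelet Running on all nodes
    kubectl get nodes                             # m03 should rejoin and report Ready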

TestMultiNode/serial/RestartKeepsNodes (116.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-820591
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-820591
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-820591: (22.322138816s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-820591 --wait=true -v=8 --alsologtostderr
E0216 17:09:26.469159   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-820591 --wait=true -v=8 --alsologtostderr: (1m34.483707098s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-820591
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.93s)

TestMultiNode/serial/DeleteNode (4.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-820591 node delete m03: (4.134503059s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.74s)
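
The final assertion reads node readiness with a go-template rather than JSON. The same one-liner can be run standalone (copied from the test invocation above):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # prints one True/False per node; after `node delete m03` only two lines remain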

TestMultiNode/serial/StopMultiNode (21.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 stop
E0216 17:10:12.009524   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-820591 stop: (21.243153184s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-820591 status: exit status 7 (105.969448ms)

-- stdout --
	multinode-820591
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-820591-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr: exit status 7 (94.850804ms)

-- stdout --
	multinode-820591
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-820591-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0216 17:10:23.613442  156936 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:10:23.613578  156936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:10:23.613587  156936 out.go:304] Setting ErrFile to fd 2...
	I0216 17:10:23.613591  156936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:10:23.613794  156936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
	I0216 17:10:23.613983  156936 out.go:298] Setting JSON to false
	I0216 17:10:23.614013  156936 mustload.go:65] Loading cluster: multinode-820591
	I0216 17:10:23.614063  156936 notify.go:220] Checking for updates...
	I0216 17:10:23.614439  156936 config.go:182] Loaded profile config "multinode-820591": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:10:23.614452  156936 status.go:255] checking status of multinode-820591 ...
	I0216 17:10:23.614850  156936 cli_runner.go:164] Run: docker container inspect multinode-820591 --format={{.State.Status}}
	I0216 17:10:23.633712  156936 status.go:330] multinode-820591 host status = "Stopped" (err=<nil>)
	I0216 17:10:23.633744  156936 status.go:343] host is not running, skipping remaining checks
	I0216 17:10:23.633752  156936 status.go:257] multinode-820591 status: &{Name:multinode-820591 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0216 17:10:23.633775  156936 status.go:255] checking status of multinode-820591-m02 ...
	I0216 17:10:23.634023  156936 cli_runner.go:164] Run: docker container inspect multinode-820591-m02 --format={{.State.Status}}
	I0216 17:10:23.651579  156936 status.go:330] multinode-820591-m02 host status = "Stopped" (err=<nil>)
	I0216 17:10:23.651622  156936 status.go:343] host is not running, skipping remaining checks
	I0216 17:10:23.651630  156936 status.go:257] multinode-820591-m02 status: &{Name:multinode-820591-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.44s)

TestMultiNode/serial/RestartMultiNode (56.84s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-820591 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0216 17:10:49.520795   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-820591 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.236918388s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-820591 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.84s)

TestMultiNode/serial/ValidateNameConflict (27.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-820591
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-820591-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-820591-m02 --driver=docker  --container-runtime=docker: exit status 14 (79.181069ms)

-- stdout --
	* [multinode-820591-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-820591-m02' is duplicated with machine name 'multinode-820591-m02' in profile 'multinode-820591'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-820591-m03 --driver=docker  --container-runtime=docker
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-820591-m03 --driver=docker  --container-runtime=docker: (24.749884832s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-820591
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-820591: exit status 80 (290.272837ms)

-- stdout --
	* Adding node m03 to cluster multinode-820591
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-820591-m03 already exists in multinode-820591-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-820591-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-820591-m03: (2.057001704s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.24s)
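
Both failure modes above come from profile-name validation: a new profile may not reuse a machine name inside an existing profile, and `node add` refuses a node that already exists as a standalone profile. A hand-run sketch, with profile names taken from this run:

    minikube start -p multinode-820591-m02 --driver=docker   # exit 14, MK_USAGE: Profile name should be unique
    minikube start -p multinode-820591-m03 --driver=docker   # not yet a machine name, so this succeeds
    minikube node add -p multinode-820591                    # exit 80, GUEST_NODE_ADD: m03 already exists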

TestPreload (182s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-251910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-251910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m50.661477912s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251910 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-251910 image pull gcr.io/k8s-minikube/busybox: (1.830411091s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-251910
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-251910: (10.727325078s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-251910 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0216 17:14:26.468571   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-251910 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (56.46206762s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251910 image list
helpers_test.go:175: Cleaning up "test-preload-251910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-251910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-251910: (2.105092605s)
--- PASS: TestPreload (182.00s)
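
TestPreload verifies that an image pulled into a non-preloaded cluster survives a stop/start cycle. The equivalent manual sequence, sketched with a hypothetical profile name:

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --driver=docker
    minikube -p preload-demo image list   # busybox must still appear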

TestScheduledStopUnix (95.43s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-322384 --memory=2048 --driver=docker  --container-runtime=docker
E0216 17:15:12.010358   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-322384 --memory=2048 --driver=docker  --container-runtime=docker: (22.317302669s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-322384 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-322384 -n scheduled-stop-322384
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-322384 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-322384 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-322384 -n scheduled-stop-322384
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-322384
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-322384 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-322384
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-322384: exit status 7 (76.338168ms)

-- stdout --
	scheduled-stop-322384
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-322384 -n scheduled-stop-322384
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-322384 -n scheduled-stop-322384: exit status 7 (76.796645ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-322384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-322384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-322384: (1.645927413s)
--- PASS: TestScheduledStopUnix (95.43s)
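
The scheduled-stop flags exercised above combine the same way interactively; a sketch with a hypothetical profile name:

    minikube stop -p demo --schedule 5m          # arm a stop five minutes out
    minikube status -p demo --format={{.TimeToStop}}
    minikube stop -p demo --cancel-scheduled     # disarm the pending stop
    minikube stop -p demo --schedule 15s         # re-arm; ~15s later `minikube status` exits 7 (Stopped)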

TestSkaffold (120.63s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3539758512 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-280012 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-280012 --memory=2600 --driver=docker  --container-runtime=docker: (22.819836519s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3539758512 run --minikube-profile skaffold-280012 --kube-context skaffold-280012 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3539758512 run --minikube-profile skaffold-280012 --kube-context skaffold-280012 --status-check=true --port-forward=false --interactive=false: (1m21.499551335s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-75c7875ddf-t756n" [6d658093-5dbb-4dfe-9404-c449ed9977ee] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003921512s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-68cfc5b5df-7bldv" [543165d0-ef09-44b7-934a-c9574c7a37e1] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003428968s
helpers_test.go:175: Cleaning up "skaffold-280012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-280012
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-280012: (2.677449038s)
--- PASS: TestSkaffold (120.63s)
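
The skaffold run itself only needs the profile wired into both flags; a sketch assuming a locally installed skaffold binary rather than the /tmp test copy:

    minikube start -p skaffold-demo --driver=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
        --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app   # sample-app pods should reach Running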

TestInsufficientStorage (13.34s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-667645 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-667645 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.136200749s)

-- stdout --
	{"specversion":"1.0","id":"ba18c5d0-0bcd-43d5-bb5e-a3b458eb2824","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-667645] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ea08170-e674-42ba-88c6-9958cd428ddd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17936"}}
	{"specversion":"1.0","id":"6f2a4f21-21e8-457b-bb77-bfd15712043c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f31267bc-4a4d-41d2-8c0e-6c6a77b8b6c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig"}}
	{"specversion":"1.0","id":"2dc2fb58-d306-43e4-9531-503a59bf6cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube"}}
	{"specversion":"1.0","id":"c41d1ed3-0b12-4b31-a82d-e355710b8251","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5e99abea-9f37-418c-9246-d0fbde66f16a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"03098901-37e4-4445-af91-85cf35b73907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"60253c98-b70c-40e2-85f2-de394266d1d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"300963f1-3688-4bab-b2e0-23d5d64df8d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"53606212-fc1e-4a94-9009-ebcc77ec9337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"39dbfd83-5ce7-4754-ae39-cdbc0b2a5fa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-667645 in cluster insufficient-storage-667645","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b041e22c-af78-40f7-9c12-0a3c0a0ed56f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c267a696-9105-4485-914b-cc63f10a7978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b84350e9-854e-46a9-9e1f-91c163e5cc36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-667645 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-667645 --output=json --layout=cluster: exit status 7 (266.498165ms)

-- stdout --
	{"Name":"insufficient-storage-667645","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-667645","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0216 17:18:40.988950  197810 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-667645" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-667645 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-667645 --output=json --layout=cluster: exit status 7 (276.364555ms)

-- stdout --
	{"Name":"insufficient-storage-667645","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-667645","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0216 17:18:41.265573  197900 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-667645" does not appear in /home/jenkins/minikube-integration/17936-6821/kubeconfig
	E0216 17:18:41.275085  197900 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/insufficient-storage-667645/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-667645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-667645
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-667645: (1.659046462s)
--- PASS: TestInsufficientStorage (13.34s)
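
Exit code 26 (RSRC_DOCKER_STORAGE) carries its own remediation advice in the JSON event above; the steps it names are ordinary CLI calls:

    docker system prune -a           # free space on the host running the Docker driver
    minikube start -p demo --force   # or skip the storage check entirely, as the message notes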

TestRunningBinaryUpgrade (121.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3259862299 start -p running-upgrade-353292 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3259862299 start -p running-upgrade-353292 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m35.531560387s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-353292 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-353292 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.666001813s)
helpers_test.go:175: Cleaning up "running-upgrade-353292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-353292
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-353292: (2.118510691s)
--- PASS: TestRunningBinaryUpgrade (121.64s)

TestMissingContainerUpgrade (140.21s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2068096584 start -p missing-upgrade-908834 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2068096584 start -p missing-upgrade-908834 --memory=2200 --driver=docker  --container-runtime=docker: (1m13.888650649s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-908834
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-908834: (10.395960193s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-908834
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-908834 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-908834 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.619718214s)
helpers_test.go:175: Cleaning up "missing-upgrade-908834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-908834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-908834: (2.098743947s)
--- PASS: TestMissingContainerUpgrade (140.21s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930054 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-930054 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (101.233684ms)

-- stdout --
	* [NoKubernetes-930054] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
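
The MK_USAGE failure is the documented flag conflict, and the error text itself names the fix; by hand, with a hypothetical profile:

    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker   # exit 14
    minikube config unset kubernetes-version     # clear a globally pinned version, per the hint
    minikube start -p nok8s-demo --no-kubernetes --driver=docker                             # succeeds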

TestNoKubernetes/serial/StartWithK8s (35.18s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930054 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930054 --driver=docker  --container-runtime=docker: (34.739204687s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-930054 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.18s)

TestNoKubernetes/serial/StartWithStopK8s (10.85s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930054 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930054 --no-kubernetes --driver=docker  --container-runtime=docker: (8.216049305s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-930054 status -o json
E0216 17:19:26.469171   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-930054 status -o json: exit status 2 (295.741526ms)

-- stdout --
	{"Name":"NoKubernetes-930054","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-930054
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-930054: (2.334703635s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.85s)

TestNoKubernetes/serial/Start (6.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930054 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930054 --no-kubernetes --driver=docker  --container-runtime=docker: (6.975400299s)
--- PASS: TestNoKubernetes/serial/Start (6.98s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-930054 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-930054 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.960104ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (1.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.45s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-930054
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-930054: (1.207480241s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (7.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930054 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930054 --driver=docker  --container-runtime=docker: (7.744760636s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-930054 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-930054 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.735393ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStoppedBinaryUpgrade/Setup (2.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.49s)

TestStoppedBinaryUpgrade/Upgrade (68.97s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1977823841 start -p stopped-upgrade-799493 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1977823841 start -p stopped-upgrade-799493 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.157486924s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1977823841 -p stopped-upgrade-799493 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1977823841 -p stopped-upgrade-799493 stop: (10.809735404s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-799493 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-799493 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.003141284s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.97s)
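
The upgrade path under test is: an old release creates the cluster, the cluster is stopped, and the binary under test adopts it. Sketched with a downloaded v1.26.0 release standing in for the randomized /tmp copy:

    ./minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=docker
    ./minikube-v1.26.0 -p upgrade-demo stop
    out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --driver=docker   # must restart cleanly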

TestPause/serial/Start (80.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-815202 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-815202 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m20.726914717s)
--- PASS: TestPause/serial/Start (80.73s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-799493
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-799493: (1.116899621s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNetworkPlugins/group/auto/Start (69.75s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0216 17:23:15.058792   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 17:23:15.901533   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:15.906858   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:15.917110   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:15.937513   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:15.977795   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:16.058429   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:16.218572   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:16.539673   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:17.180148   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:18.460564   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m9.75419847s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.75s)

TestNetworkPlugins/group/kindnet/Start (53.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0216 17:23:21.020737   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:26.141774   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:23:36.382265   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (53.165310383s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.17s)

TestPause/serial/SecondStartNoReconfiguration (34.22s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-815202 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0216 17:23:56.862634   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-815202 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.20499164s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vdgfx" [c4f7c176-90e6-4c7a-aa56-e8384976b4d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004246948s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x6ldq" [2ddbc1f6-f374-436c-90f1-06890794d55e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x6ldq" [2ddbc1f6-f374-436c-90f1-06890794d55e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003190352s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r9k92" [ea443b81-87b0-42f3-b440-88cba8923efb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r9k92" [ea443b81-87b0-42f3-b440-88cba8923efb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004250049s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
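
DNS, Localhost, and HairPin are three probes run from inside the same netcat deployment; copied from the test invocations above (context name per this run):

    kubectl --context auto-123826 exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
    kubectl --context auto-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # pod-local port
    kubectl --context auto-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via its own service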

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-815202 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-815202 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-815202 --output=json --layout=cluster: exit status 2 (302.63488ms)

-- stdout --
	{"Name":"pause-815202","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-815202","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.47s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-815202 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)
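
Pause, VerifyStatus, and Unpause map one-to-one onto CLI calls; a sketch with a hypothetical profile:

    minikube pause -p pause-demo      # freezes kube-system (and related) containers
    minikube status -p pause-demo --output=json --layout=cluster   # exits 2; StatusName "Paused" (418)
    minikube unpause -p pause-demo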

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestPause/serial/PauseAgain (0.74s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-815202 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestPause/serial/DeletePaused (2.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-815202 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-815202 --alsologtostderr -v=5: (2.135160742s)
--- PASS: TestPause/serial/DeletePaused (2.14s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-815202
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-815202: exit status 1 (17.354568ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-815202: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)
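
Deletion is asserted negatively here: docker volume inspect must fail once the profile is gone (exit 1, empty [] on stdout, as captured above). A standalone sketch of the same assertion, using the profile name from this run:

	if ! docker volume inspect pause-815202 >/dev/null 2>&1; then
	    echo "volume pause-815202 removed, as expected after delete"
	fi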

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.23s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0216 17:24:37.822933   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m12.230418905s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (54.330431526s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.33s)

                                                
                                    
TestNetworkPlugins/group/false/Start (80.08s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0216 17:25:12.008806   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m20.084460904s)
--- PASS: TestNetworkPlugins/group/false/Start (80.08s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rwmlw" [b8ae799f-117f-44ea-b947-cef068c8963e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rwmlw" [b8ae799f-117f-44ea-b947-cef068c8963e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003779944s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)
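
Each NetCatPod step replaces the deployment from testdata/netcat-deployment.yaml and then polls pods labelled app=netcat until they report Running. Outside the harness, roughly the same wait can be expressed directly (a sketch; the test polls pods itself rather than calling rollout):

	kubectl --context custom-flannel-123826 rollout status deployment/netcat --timeout=15m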

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jg56g" [19bd3374-83a9-4e54-9686-e32b7a4398d3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006045648s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
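
The ControllerPod step waits up to 10m for a Ready calico-node pod selected by label. A hedged kubectl equivalent of that wait, reusing the label and namespace from the test:

	kubectl --context calico-123826 -n kube-system wait pod -l k8s-app=calico-node --for=condition=Ready --timeout=10m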

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s9jqh" [255251f4-245d-4e83-8569-02d3b0f9e9a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s9jqh" [255251f4-245d-4e83-8569-02d3b0f9e9a5] Running
E0216 17:25:59.743732   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004363848s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (79.00s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m19.000291316s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.00s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8678v" [9771569f-dfef-4487-ab3b-657ceb576ea6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8678v" [9771569f-dfef-4487-ab3b-657ceb576ea6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003283356s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (55.905133266s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.91s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m20.618970913s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c2srt" [2784c5ce-c709-4314-a5d1-5627f0bfdac8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005641823s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-56895" [e6e07060-a599-440e-88ae-aba530d30be8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-56895" [e6e07060-a599-440e-88ae-aba530d30be8] Running
E0216 17:27:29.521893   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003912597s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dnn6t" [fdab809e-7260-41c8-8a89-3f476962c4c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dnn6t" [fdab809e-7260-41c8-8a89-3f476962c4c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003114153s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (42.69s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-123826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (42.688009874s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (42.69s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kg644" [e2363a86-3a96-47e5-a1b5-73d82f0a4dfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kg644" [e2363a86-3a96-47e5-a1b5-73d82f0a4dfa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004164243s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (113.62s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-408847 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-408847 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (1m53.624579162s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.62s)
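
--preload=false makes minikube pull the component images individually instead of restoring a preloaded tarball, which is why this FirstStart runs roughly 40s longer than the embed-certs FirstStart below. One hedged way to inspect what actually landed in the node's runtime, reusing the ssh pattern from the KubeletFlags steps:

	out/minikube-linux-amd64 ssh -p no-preload-408847 "docker images"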

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-123826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-123826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-skpfc" [f779bbb2-a743-437f-826a-a4385a476c91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-skpfc" [f779bbb2-a743-437f-826a-a4385a476c91] Running
E0216 17:28:43.584324   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004644044s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-123826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-123826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E0216 17:38:15.901426   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (74.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-162802 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 17:29:13.711464   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:13.716795   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:13.727088   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:13.747398   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:13.787747   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:13.868445   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:14.028862   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:14.349925   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:14.990773   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:16.271287   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:16.737688   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:16.742991   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:16.753263   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:16.773558   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:16.813892   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:16.894254   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:17.054678   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:17.374999   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:18.015247   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:18.831591   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:19.296377   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:21.856914   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:23.952579   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:26.469122   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 17:29:26.977829   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:34.192939   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:37.218615   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:29:54.673294   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:29:57.698786   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:30:12.009504   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-162802 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m14.828962468s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-162802 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [923259af-12ed-460b-9acf-4e11b97939c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [923259af-12ed-460b-9acf-4e11b97939c8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004002321s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-162802 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-408847 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c72e908f-d8ee-4e2b-9294-46d99e3bee44] Pending
helpers_test.go:344: "busybox" [c72e908f-d8ee-4e2b-9294-46d99e3bee44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c72e908f-d8ee-4e2b-9294-46d99e3bee44] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003377008s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-408847 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-162802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-162802 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)
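
The --images=MetricsServer=registry.k8s.io/echoserver:1.4 / --registries=MetricsServer=fake.domain pair deliberately redirects the addon to an unreachable registry. A hypothetical spot-check of the image the deployment ended up with (the jsonpath query is an editorial assumption, not something the test runs):

	kubectl --context embed-certs-162802 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to carry the fake.domain/ registry prefix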

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-162802 --alsologtostderr -v=3
E0216 17:30:35.633649   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-162802 --alsologtostderr -v=3: (10.845687053s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-408847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-408847 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.76s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-408847 --alsologtostderr -v=3
E0216 17:30:38.659554   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:30:40.857101   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:40.862423   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:40.872689   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:40.893012   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:40.933359   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:41.013701   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:41.174154   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:41.495251   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:42.135767   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:43.416041   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:45.303112   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:45.308433   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:45.318819   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:45.339117   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:45.379398   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:45.459783   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-408847 --alsologtostderr -v=3: (10.763785028s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-162802 -n embed-certs-162802
E0216 17:30:45.619903   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-162802 -n embed-certs-162802: exit status 7 (123.987122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-162802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)
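
minikube status encodes component health as bit flags in its exit code (1 = host, 2 = cluster, 4 = kubernetes), so exit 7 alongside "Stopped" right after a stop is the expected result; the test's "(may be ok)" note acknowledges exactly this. A minimal sketch of tolerating that in a script:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-162802 -n embed-certs-162802 || echo "status exit $? (7 = all components down; fine after a stop)"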

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (587.41s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-162802 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 17:30:45.940604   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:45.977208   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:46.581430   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:47.861928   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-162802 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (9m47.096194002s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-162802 -n embed-certs-162802
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (587.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408847 -n no-preload-408847
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408847 -n no-preload-408847: exit status 7 (78.4477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-408847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (332.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-408847 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0216 17:30:50.422360   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:30:51.097583   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:30:55.542911   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:31:01.338143   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:31:05.783653   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:31:12.841510   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:12.846826   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:12.857195   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:12.877483   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:12.917771   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:12.998110   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:13.159022   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:13.479873   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:14.120534   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:15.401597   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:17.961941   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:21.819004   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:31:23.083116   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:26.264318   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:31:33.323936   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:53.805164   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:31:57.554215   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:32:00.580609   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:32:02.779583   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:32:07.225133   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:32:18.637589   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:18.642875   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:18.653195   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:18.673536   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:18.713887   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:18.794205   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:18.954665   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:19.275247   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:19.915958   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:21.196717   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:23.757801   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:28.878901   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:30.193481   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.198782   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.209101   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.229407   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.269685   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.350010   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.510488   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:30.831050   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:31.472051   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:32.752341   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:34.765538   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:32:35.312987   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:39.119572   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:32:40.433773   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:50.674382   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:32:59.600583   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:33:05.515160   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:05.520490   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:05.530796   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:05.551102   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:05.591442   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:05.671805   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:05.832263   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:06.152857   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:06.793218   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:08.074025   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:10.635053   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:11.154677   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:33:15.755768   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:15.901287   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/skaffold-280012/client.crt: no such file or directory
E0216 17:33:24.700768   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:33:25.996353   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:33:29.145284   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-408847 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (5m32.553325109s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408847 -n no-preload-408847
E0216 17:36:21.594457   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-816748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 17:34:13.711574   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
E0216 17:34:16.737469   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
E0216 17:34:18.712314   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:34:26.469000   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 17:34:27.437959   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:34:41.394859   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kindnet-123826/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-816748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (39.266600382s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-816748 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cb72642-8ae3-4a37-983e-aa2ea45f1989] Pending
helpers_test.go:344: "busybox" [9cb72642-8ae3-4a37-983e-aa2ea45f1989] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0216 17:34:44.421353   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/auto-123826/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9cb72642-8ae3-4a37-983e-aa2ea45f1989] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004094966s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-816748 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-816748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-816748 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.65s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-816748 --alsologtostderr -v=3
E0216 17:34:59.673377   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/kubenet-123826/client.crt: no such file or directory
E0216 17:35:02.481962   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-816748 --alsologtostderr -v=3: (10.645134277s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.65s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748: exit status 7 (127.215015ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-816748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (558.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-816748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 17:35:12.009303   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 17:35:14.036785   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
E0216 17:35:40.856437   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:35:45.303379   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
E0216 17:35:49.358551   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
E0216 17:36:08.541810   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
E0216 17:36:12.841207   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
E0216 17:36:12.986142   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-816748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (9m17.826025395s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (558.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zkmrk" [25275ca1-5aa3-42da-8248-82c6e807119a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zkmrk" [25275ca1-5aa3-42da-8248-82c6e807119a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004305998s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zkmrk" [25275ca1-5aa3-42da-8248-82c6e807119a] Running
E0216 17:36:40.527693   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/false-123826/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00399171s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-408847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-408847 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.48s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-408847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408847 -n no-preload-408847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408847 -n no-preload-408847: exit status 2 (323.109993ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-408847 -n no-preload-408847
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-408847 -n no-preload-408847: exit status 2 (307.991104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-408847 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408847 -n no-preload-408847
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-408847 -n no-preload-408847
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)

TestStartStop/group/newest-cni/serial/FirstStart (37.69s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-398474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0216 17:37:18.637756   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-398474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (37.691956663s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.69s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-398474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/newest-cni/serial/Stop (9.68s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-398474 --alsologtostderr -v=3
E0216 17:37:30.193685   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-398474 --alsologtostderr -v=3: (9.682311212s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.68s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398474 -n newest-cni-398474
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398474 -n newest-cni-398474: exit status 7 (143.158333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-398474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (27.23s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-398474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0216 17:37:46.322177   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/flannel-123826/client.crt: no such file or directory
E0216 17:37:57.877306   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/enable-default-cni-123826/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-398474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (26.901813213s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398474 -n newest-cni-398474
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.23s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-398474 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.51s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-398474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398474 -n newest-cni-398474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398474 -n newest-cni-398474: exit status 2 (322.117599ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398474 -n newest-cni-398474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398474 -n newest-cni-398474: exit status 2 (315.644494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-398474 --alsologtostderr -v=1
E0216 17:38:05.515565   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/bridge-123826/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398474 -n newest-cni-398474
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398474 -n newest-cni-398474
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

TestStartStop/group/old-k8s-version/serial/Stop (1.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-478853 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-478853 --alsologtostderr -v=3: (1.20373106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-478853 -n old-k8s-version-478853: exit status 7 (82.457839ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-478853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cqbzs" [348c2f18-7137-42a9-a8a1-2907933d9960] Running
E0216 17:40:37.070597   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/no-preload-408847/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003300605s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cqbzs" [348c2f18-7137-42a9-a8a1-2907933d9960] Running
E0216 17:40:40.857523   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/custom-flannel-123826/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004405034s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-162802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-162802 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.44s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-162802 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-162802 -n embed-certs-162802
E0216 17:40:45.303138   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/calico-123826/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-162802 -n embed-certs-162802: exit status 2 (301.341514ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-162802 -n embed-certs-162802
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-162802 -n embed-certs-162802: exit status 2 (310.167803ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-162802 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-162802 -n embed-certs-162802
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-162802 -n embed-certs-162802
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.44s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zmf7p" [aaf4b2ab-5d01-4eac-8bb9-b48fb075bd21] Running
E0216 17:44:26.468823   13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004109625s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zmf7p" [aaf4b2ab-5d01-4eac-8bb9-b48fb075bd21] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004046671s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-816748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-816748 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-816748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748: exit status 2 (299.988811ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748: exit status 2 (299.070229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-816748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-816748 -n default-k8s-diff-port-816748
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.40s)

Test skip (23/331)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.32s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-123826 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-123826

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-123826

>>> host: /etc/nsswitch.conf:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/hosts:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/resolv.conf:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-123826

>>> host: crictl pods:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: crictl containers:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> k8s: describe netcat deployment:
error: context "cilium-123826" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-123826" does not exist

>>> k8s: netcat logs:
error: context "cilium-123826" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-123826" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-123826" does not exist

>>> k8s: coredns logs:
error: context "cilium-123826" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-123826" does not exist

>>> k8s: api server logs:
error: context "cilium-123826" does not exist

>>> host: /etc/cni:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: ip a s:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: ip r s:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: iptables-save:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: iptables table nat:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-123826

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-123826

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-123826" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-123826" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-123826

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-123826

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-123826" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-123826" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-123826" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-123826" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-123826" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: kubelet daemon config:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> k8s: kubelet logs:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-123826

>>> host: docker daemon status:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: docker daemon config:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: docker system info:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: cri-docker daemon status:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: cri-docker daemon config:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: cri-dockerd version:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: containerd daemon status:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: containerd daemon config:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: containerd config dump:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: crio daemon status:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: crio daemon config:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: /etc/crio:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

>>> host: crio config:
* Profile "cilium-123826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123826"

----------------------- debugLogs end: cilium-123826 [took: 4.906528901s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-123826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-123826
--- SKIP: TestNetworkPlugins/group/cilium (5.32s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-873328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-873328
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)