Test Report: Docker_Linux 18233

43c16c6f5515599be6840239a73911c5fca4a3e9:2024-02-23:33262

Test fail (8/330)

TestIngressAddonLegacy/StartLegacyK8sCluster (511.13s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0223 00:39:56.049518  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:40:37.010226  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:41:58.932705  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:43:35.082688  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.087985  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.098244  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.118484  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.158766  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.239108  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.399511  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.720115  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:36.361130  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:37.641400  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:40.203189  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:45.323748  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:55.564157  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:44:15.086929  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:44:16.044892  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:44:42.772989  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:44:57.005516  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:46:18.926225  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m31.075513803s)

-- stdout --
	* [ingress-addon-legacy-838368] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node ingress-addon-legacy-838368 in cluster ingress-addon-legacy-838368
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 23 00:47:50 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:50.813296    5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	  Feb 23 00:47:52 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:52.812663    5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	  Feb 23 00:47:57 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:57.813070    5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	
	

-- /stdout --
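The `X Problems detected in kubelet` block in the stdout above repeats the same `ImageInspectError` for each control-plane image. When triaging a failure like this, it can help to pull the failing image references out of the log mechanically rather than by eye. A minimal sketch (a hypothetical helper, not part of the minikube test suite), using two of the kubelet lines above as sample input:

```python
import re

# Two kubelet lines from the report above, with the escaped quotes exactly
# as they appear in the captured log (raw strings keep the backslashes).
LINES = [
    r'failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"',
    r'failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"',
]

# The image reference sits between escaped quotes (\" ... \") in the message.
IMAGE_RE = re.compile(r'Failed to inspect image \\"([^\\]+)\\"')

def failing_images(lines):
    """Return the unique image references named in ImageInspectError lines,
    in the order they first appear."""
    seen = []
    for line in lines:
        m = IMAGE_RE.search(line)
        if m and m.group(1) not in seen:
            seen.append(m.group(1))
    return seen

print(failing_images(LINES))
```

Here every extracted reference is a k8s.gcr.io image at v1.18.20, which points at the preloaded legacy images rather than the cluster configuration; whether Docker 25.0.3 can still inspect those old image manifests would be the obvious next thing to check.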
** stderr ** 
	I0223 00:39:49.286234  377758 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:39:49.286523  377758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:39:49.286533  377758 out.go:304] Setting ErrFile to fd 2...
	I0223 00:39:49.286537  377758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:39:49.286763  377758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:39:49.287410  377758 out.go:298] Setting JSON to false
	I0223 00:39:49.288552  377758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4938,"bootTime":1708643851,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 00:39:49.288621  377758 start.go:139] virtualization: kvm guest
	I0223 00:39:49.290919  377758 out.go:177] * [ingress-addon-legacy-838368] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 00:39:49.292433  377758 notify.go:220] Checking for updates...
	I0223 00:39:49.292464  377758 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 00:39:49.293942  377758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 00:39:49.295295  377758 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 00:39:49.296534  377758 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 00:39:49.297788  377758 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 00:39:49.299009  377758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 00:39:49.300382  377758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 00:39:49.322464  377758 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 00:39:49.322647  377758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:39:49.372750  377758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2024-02-23 00:39:49.36352106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:39:49.372912  377758 docker.go:295] overlay module found
	I0223 00:39:49.374890  377758 out.go:177] * Using the docker driver based on user configuration
	I0223 00:39:49.376208  377758 start.go:299] selected driver: docker
	I0223 00:39:49.376222  377758 start.go:903] validating driver "docker" against <nil>
	I0223 00:39:49.376234  377758 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 00:39:49.377030  377758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:39:49.428367  377758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2024-02-23 00:39:49.420178372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:39:49.428544  377758 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0223 00:39:49.428753  377758 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 00:39:49.430289  377758 out.go:177] * Using Docker driver with root privileges
	I0223 00:39:49.431632  377758 cni.go:84] Creating CNI manager for ""
	I0223 00:39:49.431664  377758 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 00:39:49.431674  377758 start_flags.go:323] config:
	{Name:ingress-addon-legacy-838368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 00:39:49.433186  377758 out.go:177] * Starting control plane node ingress-addon-legacy-838368 in cluster ingress-addon-legacy-838368
	I0223 00:39:49.434544  377758 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 00:39:49.435753  377758 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 00:39:49.436850  377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 00:39:49.436951  377758 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 00:39:49.452305  377758 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 00:39:49.452328  377758 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 00:39:49.472648  377758 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 00:39:49.472678  377758 cache.go:56] Caching tarball of preloaded images
	I0223 00:39:49.472813  377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 00:39:49.474579  377758 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0223 00:39:49.476055  377758 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 00:39:49.512440  377758 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 00:39:53.574149  377758 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 00:39:53.574260  377758 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 00:39:54.367431  377758 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0223 00:39:54.367852  377758 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/config.json ...
	I0223 00:39:54.367893  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/config.json: {Name:mk3673064d6872c19f71258f1deec8112e0ae3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:39:54.368118  377758 cache.go:194] Successfully downloaded all kic artifacts
	I0223 00:39:54.368148  377758 start.go:365] acquiring machines lock for ingress-addon-legacy-838368: {Name:mk83b6f61dd07162aa4ec11c4e638a0950891881 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 00:39:54.368211  377758 start.go:369] acquired machines lock for "ingress-addon-legacy-838368" in 45.497µs
	I0223 00:39:54.368234  377758 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-838368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 00:39:54.368341  377758 start.go:125] createHost starting for "" (driver="docker")
	I0223 00:39:54.370711  377758 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0223 00:39:54.370979  377758 start.go:159] libmachine.API.Create for "ingress-addon-legacy-838368" (driver="docker")
	I0223 00:39:54.371020  377758 client.go:168] LocalClient.Create starting
	I0223 00:39:54.371097  377758 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem
	I0223 00:39:54.371143  377758 main.go:141] libmachine: Decoding PEM data...
	I0223 00:39:54.371175  377758 main.go:141] libmachine: Parsing certificate...
	I0223 00:39:54.371246  377758 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem
	I0223 00:39:54.371279  377758 main.go:141] libmachine: Decoding PEM data...
	I0223 00:39:54.371298  377758 main.go:141] libmachine: Parsing certificate...
	I0223 00:39:54.371674  377758 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-838368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 00:39:54.387645  377758 cli_runner.go:211] docker network inspect ingress-addon-legacy-838368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 00:39:54.387718  377758 network_create.go:281] running [docker network inspect ingress-addon-legacy-838368] to gather additional debugging logs...
	I0223 00:39:54.387750  377758 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-838368
	W0223 00:39:54.402418  377758 cli_runner.go:211] docker network inspect ingress-addon-legacy-838368 returned with exit code 1
	I0223 00:39:54.402460  377758 network_create.go:284] error running [docker network inspect ingress-addon-legacy-838368]: docker network inspect ingress-addon-legacy-838368: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-838368 not found
	I0223 00:39:54.402477  377758 network_create.go:286] output of [docker network inspect ingress-addon-legacy-838368]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-838368 not found
	
	** /stderr **
	I0223 00:39:54.402573  377758 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 00:39:54.417789  377758 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025a3200}
	I0223 00:39:54.417825  377758 network_create.go:124] attempt to create docker network ingress-addon-legacy-838368 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0223 00:39:54.417866  377758 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 ingress-addon-legacy-838368
	I0223 00:39:54.469358  377758 network_create.go:108] docker network ingress-addon-legacy-838368 192.168.49.0/24 created
	I0223 00:39:54.469393  377758 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-838368" container
	I0223 00:39:54.469459  377758 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 00:39:54.483915  377758 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-838368 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --label created_by.minikube.sigs.k8s.io=true
	I0223 00:39:54.499488  377758 oci.go:103] Successfully created a docker volume ingress-addon-legacy-838368
	I0223 00:39:54.499574  377758 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-838368-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --entrypoint /usr/bin/test -v ingress-addon-legacy-838368:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0223 00:39:56.008915  377758 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-838368-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --entrypoint /usr/bin/test -v ingress-addon-legacy-838368:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.509294438s)
	I0223 00:39:56.008947  377758 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-838368
	I0223 00:39:56.008966  377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 00:39:56.008993  377758 kic.go:194] Starting extracting preloaded images to volume ...
	I0223 00:39:56.009059  377758 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-838368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 00:40:01.061105  377758 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-838368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (5.052003476s)
	I0223 00:40:01.061148  377758 kic.go:203] duration metric: took 5.052151 seconds to extract preloaded images to volume
	W0223 00:40:01.061312  377758 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 00:40:01.061442  377758 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 00:40:01.113152  377758 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-838368 --name ingress-addon-legacy-838368 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --network ingress-addon-legacy-838368 --ip 192.168.49.2 --volume ingress-addon-legacy-838368:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0223 00:40:01.403865  377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Running}}
	I0223 00:40:01.422210  377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
	I0223 00:40:01.441081  377758 cli_runner.go:164] Run: docker exec ingress-addon-legacy-838368 stat /var/lib/dpkg/alternatives/iptables
	I0223 00:40:01.481363  377758 oci.go:144] the created container "ingress-addon-legacy-838368" has a running status.
	I0223 00:40:01.481407  377758 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa...
	I0223 00:40:01.637161  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 00:40:01.637231  377758 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 00:40:01.656288  377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
	I0223 00:40:01.674505  377758 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 00:40:01.674532  377758 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-838368 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 00:40:01.727668  377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
	I0223 00:40:01.743484  377758 machine.go:88] provisioning docker machine ...
	I0223 00:40:01.743521  377758 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-838368"
	I0223 00:40:01.743579  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:01.759347  377758 main.go:141] libmachine: Using SSH client type: native
	I0223 00:40:01.759616  377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0223 00:40:01.759637  377758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-838368 && echo "ingress-addon-legacy-838368" | sudo tee /etc/hostname
	I0223 00:40:01.760231  377758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43126->127.0.0.1:33102: read: connection reset by peer
	I0223 00:40:04.900481  377758 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-838368
	
	I0223 00:40:04.900589  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:04.916828  377758 main.go:141] libmachine: Using SSH client type: native
	I0223 00:40:04.917064  377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0223 00:40:04.917092  377758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-838368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-838368/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-838368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 00:40:05.046178  377758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 00:40:05.046216  377758 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 00:40:05.046254  377758 ubuntu.go:177] setting up certificates
	I0223 00:40:05.046269  377758 provision.go:83] configureAuth start
	I0223 00:40:05.046351  377758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-838368
	I0223 00:40:05.063332  377758 provision.go:138] copyHostCerts
	I0223 00:40:05.063371  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 00:40:05.063403  377758 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 00:40:05.063412  377758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 00:40:05.063480  377758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 00:40:05.063551  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 00:40:05.063569  377758 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 00:40:05.063580  377758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 00:40:05.063604  377758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 00:40:05.063644  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 00:40:05.063660  377758 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 00:40:05.063667  377758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 00:40:05.063686  377758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 00:40:05.063743  377758 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-838368 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-838368]
	I0223 00:40:05.121370  377758 provision.go:172] copyRemoteCerts
	I0223 00:40:05.121440  377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 00:40:05.121496  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:05.137109  377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:40:05.230984  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 00:40:05.231045  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0223 00:40:05.252530  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 00:40:05.252648  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 00:40:05.273506  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 00:40:05.273579  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 00:40:05.294066  377758 provision.go:86] duration metric: configureAuth took 247.76753ms
	I0223 00:40:05.294098  377758 ubuntu.go:193] setting minikube options for container-runtime
	I0223 00:40:05.294253  377758 config.go:182] Loaded profile config "ingress-addon-legacy-838368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 00:40:05.294300  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:05.309896  377758 main.go:141] libmachine: Using SSH client type: native
	I0223 00:40:05.310133  377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0223 00:40:05.310148  377758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 00:40:05.438161  377758 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 00:40:05.438191  377758 ubuntu.go:71] root file system type: overlay
	I0223 00:40:05.438290  377758 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 00:40:05.438349  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:05.454564  377758 main.go:141] libmachine: Using SSH client type: native
	I0223 00:40:05.454772  377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0223 00:40:05.454853  377758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 00:40:05.593285  377758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 00:40:05.593356  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:05.609550  377758 main.go:141] libmachine: Using SSH client type: native
	I0223 00:40:05.609807  377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0223 00:40:05.609835  377758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 00:40:06.275259  377758 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-23 00:40:05.588690843 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 00:40:06.275288  377758 machine.go:91] provisioned docker machine in 4.531782245s
	I0223 00:40:06.275299  377758 client.go:171] LocalClient.Create took 11.904267527s
	I0223 00:40:06.275318  377758 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-838368" took 11.904340434s
	I0223 00:40:06.275328  377758 start.go:300] post-start starting for "ingress-addon-legacy-838368" (driver="docker")
	I0223 00:40:06.275339  377758 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 00:40:06.275403  377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 00:40:06.275445  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:06.291972  377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:40:06.387256  377758 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 00:40:06.390284  377758 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 00:40:06.390314  377758 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 00:40:06.390322  377758 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 00:40:06.390329  377758 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 00:40:06.390354  377758 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 00:40:06.390407  377758 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 00:40:06.390476  377758 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 00:40:06.390489  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> /etc/ssl/certs/3243752.pem
	I0223 00:40:06.390563  377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 00:40:06.397990  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 00:40:06.418759  377758 start.go:303] post-start completed in 143.418294ms
	I0223 00:40:06.419075  377758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-838368
	I0223 00:40:06.435524  377758 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/config.json ...
	I0223 00:40:06.435743  377758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 00:40:06.435782  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:06.452817  377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:40:06.542756  377758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 00:40:06.546784  377758 start.go:128] duration metric: createHost completed in 12.178427063s
	I0223 00:40:06.546814  377758 start.go:83] releasing machines lock for "ingress-addon-legacy-838368", held for 12.17859169s
	I0223 00:40:06.546890  377758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-838368
	I0223 00:40:06.562586  377758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 00:40:06.562610  377758 ssh_runner.go:195] Run: cat /version.json
	I0223 00:40:06.562674  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:06.562675  377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:40:06.578493  377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:40:06.579466  377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:40:06.665694  377758 ssh_runner.go:195] Run: systemctl --version
	I0223 00:40:06.752850  377758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 00:40:06.757781  377758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 00:40:06.780625  377758 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 00:40:06.780723  377758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 00:40:06.796237  377758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 00:40:06.810879  377758 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 00:40:06.810914  377758 start.go:475] detecting cgroup driver to use...
	I0223 00:40:06.810951  377758 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 00:40:06.811094  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 00:40:06.825040  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0223 00:40:06.833534  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 00:40:06.841812  377758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 00:40:06.841877  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 00:40:06.850328  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 00:40:06.859007  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 00:40:06.867580  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 00:40:06.875909  377758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 00:40:06.883732  377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 00:40:06.892194  377758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 00:40:06.899384  377758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 00:40:06.906432  377758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 00:40:06.975196  377758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 00:40:07.060459  377758 start.go:475] detecting cgroup driver to use...
	I0223 00:40:07.060514  377758 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 00:40:07.060575  377758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 00:40:07.072293  377758 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 00:40:07.072377  377758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 00:40:07.083900  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 00:40:07.099495  377758 ssh_runner.go:195] Run: which cri-dockerd
	I0223 00:40:07.102727  377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 00:40:07.111975  377758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 00:40:07.129332  377758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 00:40:07.213617  377758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 00:40:07.312203  377758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 00:40:07.312367  377758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 00:40:07.328280  377758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 00:40:07.403680  377758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 00:40:07.634498  377758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 00:40:07.656743  377758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 00:40:07.681832  377758 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0223 00:40:07.681955  377758 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-838368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 00:40:07.698240  377758 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0223 00:40:07.701817  377758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 00:40:07.711712  377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 00:40:07.711761  377758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 00:40:07.728889  377758 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 00:40:07.728915  377758 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0223 00:40:07.728969  377758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 00:40:07.736972  377758 ssh_runner.go:195] Run: which lz4
	I0223 00:40:07.739957  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0223 00:40:07.740035  377758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0223 00:40:07.742967  377758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 00:40:07.742999  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0223 00:40:08.536216  377758 docker.go:649] Took 0.796199 seconds to copy over tarball
	I0223 00:40:08.536289  377758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 00:40:10.521024  377758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.984691525s)
	I0223 00:40:10.521059  377758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0223 00:40:10.582742  377758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 00:40:10.590580  377758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0223 00:40:10.606534  377758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 00:40:10.684602  377758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 00:40:13.285754  377758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.60111495s)
	I0223 00:40:13.285840  377758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 00:40:13.304588  377758 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 00:40:13.304612  377758 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0223 00:40:13.304625  377758 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0223 00:40:13.306030  377758 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0223 00:40:13.306171  377758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0223 00:40:13.306175  377758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 00:40:13.306192  377758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0223 00:40:13.306213  377758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0223 00:40:13.306240  377758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0223 00:40:13.306275  377758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0223 00:40:13.306036  377758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0223 00:40:13.307074  377758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0223 00:40:13.307176  377758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0223 00:40:13.307235  377758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0223 00:40:13.307254  377758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0223 00:40:13.307278  377758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0223 00:40:13.307365  377758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0223 00:40:13.307371  377758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0223 00:40:13.307397  377758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 00:40:13.486399  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0223 00:40:13.486399  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0223 00:40:13.490042  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0223 00:40:13.504805  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0223 00:40:13.506265  377758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0223 00:40:13.506317  377758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0223 00:40:13.506359  377758 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0223 00:40:13.506366  377758 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0223 00:40:13.506402  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0223 00:40:13.506408  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0223 00:40:13.507586  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0223 00:40:13.507735  377758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0223 00:40:13.507777  377758 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0223 00:40:13.507822  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0223 00:40:13.521410  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0223 00:40:13.523140  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 00:40:13.525064  377758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0223 00:40:13.525124  377758 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0223 00:40:13.525161  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0223 00:40:13.531860  377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0223 00:40:13.574371  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0223 00:40:13.574428  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0223 00:40:13.574485  377758 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0223 00:40:13.574535  377758 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0223 00:40:13.574548  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0223 00:40:13.574586  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0223 00:40:13.587667  377758 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0223 00:40:13.587709  377758 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0223 00:40:13.587754  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0223 00:40:13.587957  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0223 00:40:13.590147  377758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0223 00:40:13.590190  377758 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0223 00:40:13.590231  377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0223 00:40:13.595440  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0223 00:40:13.606338  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0223 00:40:13.608550  377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0223 00:40:13.608595  377758 cache_images.go:92] LoadImages completed in 303.957131ms
	W0223 00:40:13.608652  377758 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I0223 00:40:13.608696  377758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 00:40:13.684401  377758 cni.go:84] Creating CNI manager for ""
	I0223 00:40:13.684433  377758 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 00:40:13.684455  377758 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 00:40:13.684474  377758 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-838368 NodeName:ingress-addon-legacy-838368 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0223 00:40:13.684635  377758 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-838368"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 00:40:13.684706  377758 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-838368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 00:40:13.684761  377758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0223 00:40:13.693307  377758 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 00:40:13.693383  377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 00:40:13.700897  377758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0223 00:40:13.716258  377758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0223 00:40:13.731277  377758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0223 00:40:13.747103  377758 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0223 00:40:13.750096  377758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 00:40:13.759456  377758 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368 for IP: 192.168.49.2
	I0223 00:40:13.759490  377758 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:13.759646  377758 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 00:40:13.759694  377758 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 00:40:13.759761  377758 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.key
	I0223 00:40:13.759780  377758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.crt with IP's: []
	I0223 00:40:13.922518  377758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.crt ...
	I0223 00:40:13.922549  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.crt: {Name:mk682c96244c8a17d35edaa6656fea4a9ab28eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:13.922711  377758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.key ...
	I0223 00:40:13.922727  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.key: {Name:mk8d5c45a56877445bbdb572f958752e97fbd28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:13.922809  377758 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2
	I0223 00:40:13.922824  377758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 00:40:14.011628  377758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2 ...
	I0223 00:40:14.011661  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2: {Name:mkc128ee5384e47c582a935515d634944f05717f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:14.011817  377758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2 ...
	I0223 00:40:14.011831  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2: {Name:mk7a17ed5cbd1d362645e189717eddd537e35aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:14.011939  377758 certs.go:337] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt
	I0223 00:40:14.012032  377758 certs.go:341] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key
	I0223 00:40:14.012093  377758 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key
	I0223 00:40:14.012105  377758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt with IP's: []
	I0223 00:40:14.224951  377758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt ...
	I0223 00:40:14.224983  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt: {Name:mk3df2d6b96d64b2e4eed6dc41ec21f03c3fd6dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:14.225150  377758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key ...
	I0223 00:40:14.225164  377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key: {Name:mkfeec4b6e0c772d5957cf898551efc304a32f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 00:40:14.225228  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 00:40:14.225246  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 00:40:14.225259  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 00:40:14.225271  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 00:40:14.225281  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 00:40:14.225291  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 00:40:14.225304  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 00:40:14.225316  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 00:40:14.225371  377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 00:40:14.225410  377758 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 00:40:14.225420  377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 00:40:14.225444  377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 00:40:14.225466  377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 00:40:14.225486  377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 00:40:14.225527  377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 00:40:14.225559  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem -> /usr/share/ca-certificates/324375.pem
	I0223 00:40:14.225572  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> /usr/share/ca-certificates/3243752.pem
	I0223 00:40:14.225584  377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 00:40:14.226259  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 00:40:14.248993  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 00:40:14.270172  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 00:40:14.291901  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 00:40:14.313264  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 00:40:14.334908  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 00:40:14.356302  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 00:40:14.377543  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 00:40:14.398935  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 00:40:14.420106  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 00:40:14.441205  377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 00:40:14.461748  377758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 00:40:14.476964  377758 ssh_runner.go:195] Run: openssl version
	I0223 00:40:14.481806  377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 00:40:14.489779  377758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 00:40:14.492773  377758 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 00:40:14.492832  377758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 00:40:14.498825  377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 00:40:14.507398  377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 00:40:14.516324  377758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 00:40:14.519493  377758 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 00:40:14.519543  377758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 00:40:14.526006  377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 00:40:14.534234  377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 00:40:14.542379  377758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 00:40:14.545361  377758 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 00:40:14.545424  377758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 00:40:14.551547  377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
	I0223 00:40:14.559815  377758 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 00:40:14.562828  377758 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0223 00:40:14.562886  377758 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-838368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 00:40:14.563037  377758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 00:40:14.579431  377758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 00:40:14.587724  377758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 00:40:14.596024  377758 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 00:40:14.596075  377758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 00:40:14.604301  377758 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 00:40:14.604365  377758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 00:40:14.647959  377758 kubeadm.go:322] W0223 00:40:14.647464    1834 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 00:40:14.761393  377758 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 00:40:14.810896  377758 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0223 00:40:14.811187  377758 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 00:40:14.877714  377758 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 00:40:17.626037  377758 kubeadm.go:322] W0223 00:40:17.625718    1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 00:40:17.627060  377758 kubeadm.go:322] W0223 00:40:17.626730    1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 00:44:17.631768  377758 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 00:44:17.631893  377758 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 00:44:17.634579  377758 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 00:44:17.634632  377758 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 00:44:17.634712  377758 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 00:44:17.634768  377758 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 00:44:17.634815  377758 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 00:44:17.634852  377758 kubeadm.go:322] OS: Linux
	I0223 00:44:17.634896  377758 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 00:44:17.634938  377758 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 00:44:17.634979  377758 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 00:44:17.635026  377758 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 00:44:17.635070  377758 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 00:44:17.635169  377758 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 00:44:17.635282  377758 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 00:44:17.635429  377758 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 00:44:17.635573  377758 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 00:44:17.635672  377758 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 00:44:17.635746  377758 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 00:44:17.635782  377758 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 00:44:17.635834  377758 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 00:44:17.637801  377758 out.go:204]   - Generating certificates and keys ...
	I0223 00:44:17.637891  377758 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 00:44:17.637947  377758 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 00:44:17.638019  377758 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 00:44:17.638109  377758 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 00:44:17.638192  377758 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 00:44:17.638273  377758 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 00:44:17.638327  377758 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 00:44:17.638443  377758 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 00:44:17.638499  377758 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 00:44:17.638610  377758 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 00:44:17.638685  377758 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 00:44:17.638771  377758 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 00:44:17.638811  377758 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 00:44:17.638866  377758 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 00:44:17.638914  377758 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 00:44:17.638970  377758 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 00:44:17.639066  377758 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 00:44:17.639121  377758 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 00:44:17.639196  377758 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 00:44:17.641678  377758 out.go:204]   - Booting up control plane ...
	I0223 00:44:17.641767  377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 00:44:17.641849  377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 00:44:17.641926  377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 00:44:17.642061  377758 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 00:44:17.642263  377758 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 00:44:17.642343  377758 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 00:44:17.642355  377758 kubeadm.go:322] 
	I0223 00:44:17.642389  377758 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 00:44:17.642428  377758 kubeadm.go:322] 		timed out waiting for the condition
	I0223 00:44:17.642434  377758 kubeadm.go:322] 
	I0223 00:44:17.642470  377758 kubeadm.go:322] 	This error is likely caused by:
	I0223 00:44:17.642500  377758 kubeadm.go:322] 		- The kubelet is not running
	I0223 00:44:17.642596  377758 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 00:44:17.642612  377758 kubeadm.go:322] 
	I0223 00:44:17.642702  377758 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 00:44:17.642736  377758 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 00:44:17.642764  377758 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 00:44:17.642770  377758 kubeadm.go:322] 
	I0223 00:44:17.642857  377758 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 00:44:17.642932  377758 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 00:44:17.642944  377758 kubeadm.go:322] 
	I0223 00:44:17.643041  377758 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 00:44:17.643092  377758 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 00:44:17.643161  377758 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 00:44:17.643191  377758 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 00:44:17.643211  377758 kubeadm.go:322] 
	W0223 00:44:17.643419  377758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 00:40:14.647464    1834 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 00:40:17.625718    1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 00:40:17.626730    1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 00:44:17.643515  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 00:44:18.371475  377758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 00:44:18.381850  377758 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 00:44:18.381910  377758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 00:44:18.389415  377758 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 00:44:18.389458  377758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 00:44:18.431260  377758 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 00:44:18.431344  377758 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 00:44:18.593850  377758 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 00:44:18.593957  377758 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 00:44:18.594043  377758 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 00:44:18.594140  377758 kubeadm.go:322] OS: Linux
	I0223 00:44:18.594245  377758 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 00:44:18.594325  377758 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 00:44:18.594401  377758 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 00:44:18.594479  377758 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 00:44:18.594562  377758 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 00:44:18.594643  377758 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 00:44:18.660997  377758 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 00:44:18.661130  377758 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 00:44:18.661237  377758 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 00:44:18.826855  377758 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 00:44:18.827733  377758 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 00:44:18.827804  377758 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 00:44:18.911167  377758 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 00:44:18.914791  377758 out.go:204]   - Generating certificates and keys ...
	I0223 00:44:18.914898  377758 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 00:44:18.914982  377758 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 00:44:18.915092  377758 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 00:44:18.915183  377758 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 00:44:18.915275  377758 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 00:44:18.915364  377758 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 00:44:18.915482  377758 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 00:44:18.915584  377758 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 00:44:18.915710  377758 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 00:44:18.915816  377758 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 00:44:18.915883  377758 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 00:44:18.915964  377758 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 00:44:19.088516  377758 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 00:44:19.171278  377758 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 00:44:19.325843  377758 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 00:44:19.938889  377758 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 00:44:19.939517  377758 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 00:44:19.941564  377758 out.go:204]   - Booting up control plane ...
	I0223 00:44:19.941655  377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 00:44:19.945700  377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 00:44:19.946756  377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 00:44:19.947323  377758 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 00:44:19.950391  377758 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 00:44:59.951111  377758 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 00:48:19.972138  377758 kubeadm.go:322] 
	I0223 00:48:19.972248  377758 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 00:48:19.972303  377758 kubeadm.go:322] 		timed out waiting for the condition
	I0223 00:48:19.972313  377758 kubeadm.go:322] 
	I0223 00:48:19.972362  377758 kubeadm.go:322] 	This error is likely caused by:
	I0223 00:48:19.972411  377758 kubeadm.go:322] 		- The kubelet is not running
	I0223 00:48:19.972534  377758 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 00:48:19.972546  377758 kubeadm.go:322] 
	I0223 00:48:19.972664  377758 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 00:48:19.972713  377758 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 00:48:19.972761  377758 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 00:48:19.972770  377758 kubeadm.go:322] 
	I0223 00:48:19.972881  377758 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 00:48:19.972992  377758 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 00:48:19.973002  377758 kubeadm.go:322] 
	I0223 00:48:19.973104  377758 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 00:48:19.973176  377758 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 00:48:19.973277  377758 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 00:48:19.973324  377758 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 00:48:19.973336  377758 kubeadm.go:322] 
	I0223 00:48:19.975369  377758 kubeadm.go:322] W0223 00:44:18.430772    5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 00:48:19.975574  377758 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 00:48:19.975715  377758 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0223 00:48:19.975923  377758 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 00:48:19.976046  377758 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 00:48:19.976188  377758 kubeadm.go:322] W0223 00:44:19.945491    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 00:48:19.976337  377758 kubeadm.go:322] W0223 00:44:19.946527    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 00:48:19.976477  377758 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 00:48:19.976572  377758 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 00:48:19.976710  377758 kubeadm.go:406] StartCluster complete in 8m5.413835988s
	I0223 00:48:19.976868  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 00:48:19.994132  377758 logs.go:276] 0 containers: []
	W0223 00:48:19.994156  377758 logs.go:278] No container was found matching "kube-apiserver"
	I0223 00:48:19.994219  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 00:48:20.009930  377758 logs.go:276] 0 containers: []
	W0223 00:48:20.009962  377758 logs.go:278] No container was found matching "etcd"
	I0223 00:48:20.010015  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 00:48:20.026342  377758 logs.go:276] 0 containers: []
	W0223 00:48:20.026372  377758 logs.go:278] No container was found matching "coredns"
	I0223 00:48:20.026428  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 00:48:20.042848  377758 logs.go:276] 0 containers: []
	W0223 00:48:20.042882  377758 logs.go:278] No container was found matching "kube-scheduler"
	I0223 00:48:20.042934  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 00:48:20.060794  377758 logs.go:276] 0 containers: []
	W0223 00:48:20.060824  377758 logs.go:278] No container was found matching "kube-proxy"
	I0223 00:48:20.060872  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 00:48:20.076670  377758 logs.go:276] 0 containers: []
	W0223 00:48:20.076699  377758 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 00:48:20.076747  377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 00:48:20.092844  377758 logs.go:276] 0 containers: []
	W0223 00:48:20.092870  377758 logs.go:278] No container was found matching "kindnet"
	I0223 00:48:20.092887  377758 logs.go:123] Gathering logs for kubelet ...
	I0223 00:48:20.092903  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 00:48:20.114509  377758 logs.go:138] Found kubelet problem: Feb 23 00:47:50 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:50.813296    5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0223 00:48:20.116450  377758 logs.go:138] Found kubelet problem: Feb 23 00:47:52 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:52.812663    5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0223 00:48:20.122035  377758 logs.go:138] Found kubelet problem: Feb 23 00:47:57 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:57.813070    5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0223 00:48:20.125134  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:00 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:00.812549    5752 pod_workers.go:191] Error syncing pod 68d95bb8149ed8a5ab727bf63000f885 ("etcd-ingress-addon-legacy-838368_kube-system(68d95bb8149ed8a5ab727bf63000f885)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0223 00:48:20.128498  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:03 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:03.813330    5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0223 00:48:20.130596  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:05 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:05.812868    5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0223 00:48:20.136687  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:12 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:12.814975    5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0223 00:48:20.137148  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:12 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:12.816105    5752 pod_workers.go:191] Error syncing pod 68d95bb8149ed8a5ab727bf63000f885 ("etcd-ingress-addon-legacy-838368_kube-system(68d95bb8149ed8a5ab727bf63000f885)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0223 00:48:20.140578  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:16 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:16.812804    5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0223 00:48:20.142822  377758 logs.go:138] Found kubelet problem: Feb 23 00:48:18 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:18.814260    5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	I0223 00:48:20.144011  377758 logs.go:123] Gathering logs for dmesg ...
	I0223 00:48:20.144034  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 00:48:20.173274  377758 logs.go:123] Gathering logs for describe nodes ...
	I0223 00:48:20.173312  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 00:48:20.231093  377758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 00:48:20.231118  377758 logs.go:123] Gathering logs for Docker ...
	I0223 00:48:20.231130  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 00:48:20.249579  377758 logs.go:123] Gathering logs for container status ...
	I0223 00:48:20.249614  377758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0223 00:48:20.285976  377758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 00:44:18.430772    5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 00:44:19.945491    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 00:44:19.946527    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 00:48:20.286029  377758 out.go:239] * 
	W0223 00:48:20.286120  377758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 00:44:18.430772    5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 00:44:19.945491    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 00:44:19.946527    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 00:48:20.286146  377758 out.go:239] * 
	W0223 00:48:20.287449  377758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 00:48:20.290118  377758 out.go:177] X Problems detected in kubelet:
	I0223 00:48:20.291689  377758 out.go:177]   Feb 23 00:47:50 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:50.813296    5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	I0223 00:48:20.293525  377758 out.go:177]   Feb 23 00:47:52 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:52.812663    5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	I0223 00:48:20.295572  377758 out.go:177]   Feb 23 00:47:57 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:57.813070    5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	I0223 00:48:20.298693  377758 out.go:177] 
	W0223 00:48:20.300097  377758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 00:44:18.430772    5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 00:44:19.945491    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 00:44:19.946527    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 00:44:18.430772    5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 00:44:19.945491    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 00:44:19.946527    5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 00:48:20.300154  377758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 00:48:20.300171  377758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 00:48:20.302020  377758 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (511.13s)
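The suggestion in the log points at a cgroup-driver mismatch: the preflight warning detected `cgroupfs` as the Docker cgroup driver while `systemd` is recommended, and the kubelet never came up. A minimal offline sketch of the check implied by that warning (the `check_cgroup_driver` helper is hypothetical; on a live node the first argument would come from `docker info --format '{{.CgroupDriver}}'`):

```shell
#!/usr/bin/env sh
# Compare the container runtime's cgroup driver with the kubelet's expectation.
# Parameterized so the logic can be exercised without a live Docker daemon.

check_cgroup_driver() {
  docker_driver="$1"   # e.g. output of: docker info --format '{{.CgroupDriver}}'
  kubelet_driver="$2"  # value passed to the kubelet's --cgroup-driver flag
  if [ "$docker_driver" = "$kubelet_driver" ]; then
    echo "ok: both use $docker_driver"
  else
    echo "mismatch: docker=$docker_driver kubelet=$kubelet_driver"
  fi
}

# The combination this test run reported:
check_cgroup_driver cgroupfs systemd
```

A mismatch here is consistent with the log's own remedy: pass `--extra-config=kubelet.cgroup-driver=systemd` to `minikube start`, or reconfigure Docker to use the `systemd` driver.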

                                                
                                    

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.18s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-838368 addons enable ingress --alsologtostderr -v=5
E0223 00:48:35.080253  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:49:02.767270  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:49:15.087455  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-838368 addons enable ingress --alsologtostderr -v=5: signal: killed (1m28.880641863s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 00:48:20.428175  388876 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:48:20.428456  388876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:48:20.428465  388876 out.go:304] Setting ErrFile to fd 2...
	I0223 00:48:20.428469  388876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:48:20.428636  388876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:48:20.428955  388876 mustload.go:65] Loading cluster: ingress-addon-legacy-838368
	I0223 00:48:20.430297  388876 config.go:182] Loaded profile config "ingress-addon-legacy-838368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 00:48:20.430339  388876 addons.go:597] checking whether the cluster is paused
	I0223 00:48:20.430908  388876 config.go:182] Loaded profile config "ingress-addon-legacy-838368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 00:48:20.430951  388876 host.go:66] Checking if "ingress-addon-legacy-838368" exists ...
	I0223 00:48:20.431649  388876 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
	I0223 00:48:20.448256  388876 ssh_runner.go:195] Run: systemctl --version
	I0223 00:48:20.448314  388876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:48:20.465158  388876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:48:20.558841  388876 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 00:48:20.577854  388876 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 00:48:20.579378  388876 config.go:182] Loaded profile config "ingress-addon-legacy-838368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 00:48:20.579396  388876 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-838368"
	I0223 00:48:20.579404  388876 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-838368"
	I0223 00:48:20.579474  388876 host.go:66] Checking if "ingress-addon-legacy-838368" exists ...
	I0223 00:48:20.579830  388876 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
	I0223 00:48:20.598262  388876 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 00:48:20.599821  388876 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0223 00:48:20.601353  388876 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 00:48:20.602868  388876 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0223 00:48:20.602888  388876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0223 00:48:20.602938  388876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
	I0223 00:48:20.618514  388876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
	I0223 00:48:20.719347  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:20.773489  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:20.773547  388876 retry.go:31] will retry after 181.193689ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:20.954894  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:21.006987  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:21.007036  388876 retry.go:31] will retry after 309.236929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:21.316566  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:21.369120  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:21.369156  388876 retry.go:31] will retry after 286.090727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:21.655622  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:21.709575  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:21.709614  388876 retry.go:31] will retry after 482.641724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:22.193316  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:22.245385  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:22.245421  388876 retry.go:31] will retry after 1.415150164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:23.662094  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:23.715374  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:23.715418  388876 retry.go:31] will retry after 1.29232396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:25.008937  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:25.062208  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:25.062238  388876 retry.go:31] will retry after 4.023679218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:29.086210  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:29.138686  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:29.138725  388876 retry.go:31] will retry after 3.886795655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:33.028006  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:33.080814  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:33.080847  388876 retry.go:31] will retry after 7.465884373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:40.547689  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:40.601134  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:40.601173  388876 retry.go:31] will retry after 8.25545488s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:48.859618  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:48:48.912595  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:48:48.912631  388876 retry.go:31] will retry after 15.29601954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:49:04.212368  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:49:04.265533  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:49:04.265582  388876 retry.go:31] will retry after 29.767946401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:49:34.037128  388876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 00:49:34.090514  388876 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 00:49:34.090567  388876 retry.go:31] will retry after 24.22600938s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: signal: killed
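The retry trace above shows the addon-enable loop re-running `kubectl apply` with growing delays until the 89s test timeout killed it; every attempt failed the same way because the apiserver on `localhost:8443` never started. A sketch of that retry-with-backoff pattern (`run_step` is a hypothetical stand-in for the failing apply; minikube's `retry.go` adds jitter, and the doubling delays here are illustrative):

```shell
#!/usr/bin/env sh
# Retry-with-backoff sketch matching the shape of the addon log above.

run_step() {
  # Stand-in for: kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
  # Always fails, like the apply calls against the unreachable apiserver.
  false
}

retry_with_backoff() {
  max_attempts="$1"
  attempt=0
  delay=1
  until run_step; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts"
      return 1
    fi
    echo "retry $attempt: backing off ${delay}s"
    # In a live script: sleep "$delay"
    delay=$((delay * 2))
  done
}

retry_with_backoff 5 || true
```

Note that backoff cannot help here: the root cause is the control plane failing to start in the previous test, so every retry hits the same refused connection until the harness sends SIGKILL.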
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-838368
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-838368:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "761348e721423d37e8e06892464eadbce73472cec75659164df266aaec7cd421",
	        "Created": "2024-02-23T00:40:01.12726727Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T00:40:01.396381403Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/761348e721423d37e8e06892464eadbce73472cec75659164df266aaec7cd421/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/761348e721423d37e8e06892464eadbce73472cec75659164df266aaec7cd421/hostname",
	        "HostsPath": "/var/lib/docker/containers/761348e721423d37e8e06892464eadbce73472cec75659164df266aaec7cd421/hosts",
	        "LogPath": "/var/lib/docker/containers/761348e721423d37e8e06892464eadbce73472cec75659164df266aaec7cd421/761348e721423d37e8e06892464eadbce73472cec75659164df266aaec7cd421-json.log",
	        "Name": "/ingress-addon-legacy-838368",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-838368:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-838368",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d859fe789ec7cdf1ff31b863b3922db7b04ebde44d1c24e6fd9f04115e2a016f-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d859fe789ec7cdf1ff31b863b3922db7b04ebde44d1c24e6fd9f04115e2a016f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d859fe789ec7cdf1ff31b863b3922db7b04ebde44d1c24e6fd9f04115e2a016f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d859fe789ec7cdf1ff31b863b3922db7b04ebde44d1c24e6fd9f04115e2a016f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-838368",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-838368/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-838368",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-838368",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-838368",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "82208861d5332b93d8da2a2fa49222b91e1d2d62ad3d8eee39a028790b6cf31d",
	            "SandboxKey": "/var/run/docker/netns/82208861d533",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-838368": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "761348e72142",
	                        "ingress-addon-legacy-838368"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "ce738e71190a14e034b750295249d16cd0e97d650d9738864f7ebb72ee89e948",
	                    "EndpointID": "34a02b8e634613c320c9ccf925b58a37bc3054d39b9610e6a42d1b2b4049a4bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-838368",
	                        "761348e72142"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-838368 -n ingress-addon-legacy-838368
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-838368 -n ingress-addon-legacy-838368: exit status 6 (278.879249ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0223 00:49:49.524744  390293 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-838368" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-838368" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.18s)

TestKubernetesUpgrade (814.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109 (8m33.078139741s)

-- stdout --
	* [kubernetes-upgrade-849442] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node kubernetes-upgrade-849442 in cluster kubernetes-upgrade-849442
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 23 01:15:54 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:15:54.597351    5682 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-849442_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:16:03 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:03.599312    5682 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-849442_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:16:03 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:03.600408    5682 pod_workers.go:191] Error syncing pod ee415ee9af0931b8e7b068297bfe46fe ("etcd-kubernetes-upgrade-849442_kube-system(ee415ee9af0931b8e7b068297bfe46fe)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	
	

-- /stdout --
** stderr ** 
	I0223 01:07:45.216410  538132 out.go:291] Setting OutFile to fd 1 ...
	I0223 01:07:45.216592  538132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:07:45.216604  538132 out.go:304] Setting ErrFile to fd 2...
	I0223 01:07:45.216611  538132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:07:45.216925  538132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 01:07:45.218486  538132 out.go:298] Setting JSON to false
	I0223 01:07:45.220115  538132 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6614,"bootTime":1708643851,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 01:07:45.220260  538132 start.go:139] virtualization: kvm guest
	I0223 01:07:45.222704  538132 out.go:177] * [kubernetes-upgrade-849442] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 01:07:45.223971  538132 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 01:07:45.223985  538132 notify.go:220] Checking for updates...
	I0223 01:07:45.225366  538132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 01:07:45.226582  538132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:07:45.227744  538132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 01:07:45.228922  538132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 01:07:45.230089  538132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 01:07:45.232201  538132 config.go:182] Loaded profile config "force-systemd-env-580709": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 01:07:45.232349  538132 config.go:182] Loaded profile config "missing-upgrade-619261": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0223 01:07:45.232482  538132 config.go:182] Loaded profile config "stopped-upgrade-607441": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0223 01:07:45.232621  538132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 01:07:45.258926  538132 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 01:07:45.259092  538132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:07:45.347302  538132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:70 SystemTime:2024-02-23 01:07:45.329057754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:07:45.348069  538132 docker.go:295] overlay module found
	I0223 01:07:45.350309  538132 out.go:177] * Using the docker driver based on user configuration
	I0223 01:07:45.351834  538132 start.go:299] selected driver: docker
	I0223 01:07:45.351852  538132 start.go:903] validating driver "docker" against <nil>
	I0223 01:07:45.351883  538132 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 01:07:45.352929  538132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:07:45.426196  538132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:07:45.413742102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:07:45.426372  538132 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0223 01:07:45.426562  538132 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 01:07:45.428857  538132 out.go:177] * Using Docker driver with root privileges
	I0223 01:07:45.430133  538132 cni.go:84] Creating CNI manager for ""
	I0223 01:07:45.430162  538132 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:07:45.430171  538132 start_flags.go:323] config:
	{Name:kubernetes-upgrade-849442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:07:45.431619  538132 out.go:177] * Starting control plane node kubernetes-upgrade-849442 in cluster kubernetes-upgrade-849442
	I0223 01:07:45.432827  538132 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 01:07:45.434203  538132 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 01:07:45.435592  538132 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:07:45.435628  538132 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 01:07:45.435638  538132 cache.go:56] Caching tarball of preloaded images
	I0223 01:07:45.435694  538132 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 01:07:45.435714  538132 preload.go:174] Found /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 01:07:45.435721  538132 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 01:07:45.435809  538132 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/config.json ...
	I0223 01:07:45.435833  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/config.json: {Name:mkc59f435c7dc2f7f8fc670dbcbc4618023a9d02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:07:45.453861  538132 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 01:07:45.453891  538132 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 01:07:45.453907  538132 cache.go:194] Successfully downloaded all kic artifacts
	I0223 01:07:45.453965  538132 start.go:365] acquiring machines lock for kubernetes-upgrade-849442: {Name:mkd66510a1a6624e119fa7f76664596bf0f0ccdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 01:07:45.454118  538132 start.go:369] acquired machines lock for "kubernetes-upgrade-849442" in 120.241µs
	I0223 01:07:45.454156  538132 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-849442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-849442 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 01:07:45.454302  538132 start.go:125] createHost starting for "" (driver="docker")
	I0223 01:07:45.456038  538132 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 01:07:45.456377  538132 start.go:159] libmachine.API.Create for "kubernetes-upgrade-849442" (driver="docker")
	I0223 01:07:45.456419  538132 client.go:168] LocalClient.Create starting
	I0223 01:07:45.456504  538132 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem
	I0223 01:07:45.456544  538132 main.go:141] libmachine: Decoding PEM data...
	I0223 01:07:45.456561  538132 main.go:141] libmachine: Parsing certificate...
	I0223 01:07:45.456632  538132 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem
	I0223 01:07:45.456658  538132 main.go:141] libmachine: Decoding PEM data...
	I0223 01:07:45.456671  538132 main.go:141] libmachine: Parsing certificate...
	I0223 01:07:45.457246  538132 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-849442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 01:07:45.484423  538132 cli_runner.go:211] docker network inspect kubernetes-upgrade-849442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 01:07:45.484517  538132 network_create.go:281] running [docker network inspect kubernetes-upgrade-849442] to gather additional debugging logs...
	I0223 01:07:45.484533  538132 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-849442
	W0223 01:07:45.507394  538132 cli_runner.go:211] docker network inspect kubernetes-upgrade-849442 returned with exit code 1
	I0223 01:07:45.507444  538132 network_create.go:284] error running [docker network inspect kubernetes-upgrade-849442]: docker network inspect kubernetes-upgrade-849442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-849442 not found
	I0223 01:07:45.507462  538132 network_create.go:286] output of [docker network inspect kubernetes-upgrade-849442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-849442 not found
	
	** /stderr **
	I0223 01:07:45.507585  538132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:07:45.531005  538132 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-695ca2766a58 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f4:90:90:f1} reservation:<nil>}
	I0223 01:07:45.531972  538132 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-db2da4de8123 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bf:69:28:2c} reservation:<nil>}
	I0223 01:07:45.532734  538132 network.go:212] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c6b36d126be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d2:7b:f4:7d} reservation:<nil>}
	I0223 01:07:45.533537  538132 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002981ab0}
	I0223 01:07:45.533563  538132 network_create.go:124] attempt to create docker network kubernetes-upgrade-849442 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 01:07:45.533625  538132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-849442 kubernetes-upgrade-849442
	I0223 01:07:45.604325  538132 network_create.go:108] docker network kubernetes-upgrade-849442 192.168.76.0/24 created
	I0223 01:07:45.604369  538132 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-849442" container
	I0223 01:07:45.604447  538132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 01:07:45.626845  538132 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-849442 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-849442 --label created_by.minikube.sigs.k8s.io=true
	I0223 01:07:45.651540  538132 oci.go:103] Successfully created a docker volume kubernetes-upgrade-849442
	I0223 01:07:45.651636  538132 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-849442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-849442 --entrypoint /usr/bin/test -v kubernetes-upgrade-849442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0223 01:07:46.251877  538132 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-849442
	I0223 01:07:46.251929  538132 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:07:46.251954  538132 kic.go:194] Starting extracting preloaded images to volume ...
	I0223 01:07:46.252047  538132 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-849442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 01:07:53.776986  538132 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-849442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (7.524834461s)
	I0223 01:07:53.777033  538132 kic.go:203] duration metric: took 7.525076 seconds to extract preloaded images to volume
	W0223 01:07:53.777208  538132 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 01:07:53.777377  538132 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 01:07:53.834240  538132 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-849442 --name kubernetes-upgrade-849442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-849442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-849442 --network kubernetes-upgrade-849442 --ip 192.168.76.2 --volume kubernetes-upgrade-849442:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0223 01:07:54.273166  538132 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Running}}
	I0223 01:07:54.302879  538132 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:07:54.329640  538132 cli_runner.go:164] Run: docker exec kubernetes-upgrade-849442 stat /var/lib/dpkg/alternatives/iptables
	I0223 01:07:54.389940  538132 oci.go:144] the created container "kubernetes-upgrade-849442" has a running status.
	I0223 01:07:54.389970  538132 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa...
	I0223 01:07:54.479852  538132 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 01:07:54.527834  538132 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:07:54.571072  538132 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 01:07:54.571095  538132 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-849442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 01:07:54.639533  538132 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:07:54.676260  538132 machine.go:88] provisioning docker machine ...
	I0223 01:07:54.676303  538132 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-849442"
	I0223 01:07:54.676376  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:54.718568  538132 main.go:141] libmachine: Using SSH client type: native
	I0223 01:07:54.718768  538132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0223 01:07:54.718781  538132 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-849442 && echo "kubernetes-upgrade-849442" | sudo tee /etc/hostname
	I0223 01:07:54.719476  538132 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44936->127.0.0.1:33282: read: connection reset by peer
	I0223 01:07:57.924804  538132 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-849442
	
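Because every control port is published to an ephemeral host port (`--publish=127.0.0.1::22` in the `docker run` above), minikube repeatedly runs `docker container inspect` with a template that digs the 22/tcp host binding out of the port map. The equivalent lookup over inspect's JSON, sketched in Python with an illustrative payload (the sample values are not taken from this log, apart from the port):

```python
import json

# Shape of the relevant fragment of `docker container inspect` output.
inspect_doc = json.loads("""
{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33282"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33281"}]
}}}
""")

def host_port(inspect: dict, container_port: str) -> str:
    # Mirrors the Go template:
    # {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
    return inspect["NetworkSettings"]["Ports"][container_port][0]["HostPort"]

print(host_port(inspect_doc, "22/tcp"))  # -> 33282
```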
	I0223 01:07:57.924885  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:57.942583  538132 main.go:141] libmachine: Using SSH client type: native
	I0223 01:07:57.942819  538132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0223 01:07:57.942847  538132 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-849442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-849442/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-849442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 01:07:58.074346  538132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
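The SSH command above rewrites the 127.0.1.1 entry in /etc/hosts (or appends one) so the new hostname resolves locally, and is a no-op if the entry already exists. A rough Python rendering of the same grep/sed branching, operating on file contents as a string:

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the shell logic: if no line already ends with the hostname,
    rewrite an existing 127.0.1.1 entry, else append a new one."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
        return hosts                                   # already present
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):  # rewrite in place
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)
    return hosts + f"127.0.1.1 {name}\n"               # append

print(ensure_hostname("127.0.0.1 localhost\n", "kubernetes-upgrade-849442"))
```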
	I0223 01:07:58.074390  538132 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 01:07:58.074426  538132 ubuntu.go:177] setting up certificates
	I0223 01:07:58.074439  538132 provision.go:83] configureAuth start
	I0223 01:07:58.074514  538132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-849442
	I0223 01:07:58.091221  538132 provision.go:138] copyHostCerts
	I0223 01:07:58.091285  538132 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 01:07:58.091294  538132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 01:07:58.091341  538132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 01:07:58.091422  538132 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 01:07:58.091431  538132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 01:07:58.091448  538132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 01:07:58.091507  538132 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 01:07:58.091516  538132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 01:07:58.091533  538132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 01:07:58.091583  538132 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-849442 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-849442]
	I0223 01:07:58.302724  538132 provision.go:172] copyRemoteCerts
	I0223 01:07:58.302793  538132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 01:07:58.302840  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:58.319851  538132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:07:58.414851  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 01:07:58.436877  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0223 01:07:58.458951  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 01:07:58.480745  538132 provision.go:86] duration metric: configureAuth took 406.285206ms
	I0223 01:07:58.480782  538132 ubuntu.go:193] setting minikube options for container-runtime
	I0223 01:07:58.480953  538132 config.go:182] Loaded profile config "kubernetes-upgrade-849442": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:07:58.481032  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:58.501913  538132 main.go:141] libmachine: Using SSH client type: native
	I0223 01:07:58.502249  538132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0223 01:07:58.502269  538132 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 01:07:58.642634  538132 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 01:07:58.642660  538132 ubuntu.go:71] root file system type: overlay
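The provisioner probes the root filesystem type with `df --output=fstype / | tail -n 1` and branches on the answer ("overlay" here). The pipeline's parsing step, sketched in Python over a hypothetical df output string:

```python
def root_fstype(df_output: str) -> str:
    """Equivalent of `df --output=fstype / | tail -n 1`:
    take the last non-empty line of df's header-plus-value output."""
    return [l for l in df_output.splitlines() if l.strip()][-1].strip()

sample = "Type\noverlay\n"   # illustrative `df --output=fstype /` output
print(root_fstype(sample))    # -> overlay
```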
	I0223 01:07:58.642758  538132 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 01:07:58.643128  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:58.664278  538132 main.go:141] libmachine: Using SSH client type: native
	I0223 01:07:58.664512  538132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0223 01:07:58.664608  538132 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 01:07:58.889248  538132 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 01:07:58.889335  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:58.912459  538132 main.go:141] libmachine: Using SSH client type: native
	I0223 01:07:58.912705  538132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0223 01:07:58.912733  538132 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 01:07:59.824904  538132 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-23 01:07:58.879204453 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
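The diff printed above comes from the update command at 01:07:58.912: `diff -u` compares the live unit with the generated `.new` file, and only when they differ does the `||` branch install the new file, `daemon-reload`, and restart docker. The same compare-then-swap idea as a Python sketch (file contents below are hypothetical):

```python
def update_unit(current: str, new: str):
    """Install `new` only when it differs from `current`.
    Returns (effective contents, restart needed), mirroring
    `diff -u old new || { mv new old; systemctl restart docker; }`."""
    if current == new:
        return current, False   # diff exits 0: nothing to do
    return new, True            # diff exits non-zero: swap and restart

_, restart = update_unit("ExecStart=/usr/bin/dockerd -H fd://\n",
                         "ExecStart=\nExecStart=/usr/bin/dockerd --tlsverify\n")
print(restart)  # -> True
```

The important property is idempotence: re-running provisioning with an unchanged unit file skips the docker restart entirely.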
	I0223 01:07:59.824946  538132 machine.go:91] provisioned docker machine in 5.148656245s
	I0223 01:07:59.824961  538132 client.go:171] LocalClient.Create took 14.368533916s
	I0223 01:07:59.824979  538132 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-849442" took 14.368603556s
	I0223 01:07:59.824990  538132 start.go:300] post-start starting for "kubernetes-upgrade-849442" (driver="docker")
	I0223 01:07:59.825005  538132 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 01:07:59.825080  538132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 01:07:59.825132  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:07:59.857452  538132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:07:59.959108  538132 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 01:07:59.962349  538132 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 01:07:59.962403  538132 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 01:07:59.962420  538132 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 01:07:59.962432  538132 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 01:07:59.962449  538132 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 01:07:59.962510  538132 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 01:07:59.962605  538132 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 01:07:59.962711  538132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 01:07:59.971805  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:07:59.998634  538132 start.go:303] post-start completed in 173.626834ms
	I0223 01:07:59.999024  538132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-849442
	I0223 01:08:00.018608  538132 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/config.json ...
	I0223 01:08:00.018892  538132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 01:08:00.018953  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:08:00.041828  538132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:08:00.135536  538132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 01:08:00.140221  538132 start.go:128] duration metric: createHost completed in 14.685901476s
	I0223 01:08:00.140245  538132 start.go:83] releasing machines lock for "kubernetes-upgrade-849442", held for 14.686108391s
	I0223 01:08:00.140301  538132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-849442
	I0223 01:08:00.160855  538132 ssh_runner.go:195] Run: cat /version.json
	I0223 01:08:00.160904  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:08:00.160952  538132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 01:08:00.161013  538132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:08:00.179669  538132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:08:00.183016  538132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:08:00.374304  538132 ssh_runner.go:195] Run: systemctl --version
	I0223 01:08:00.378611  538132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 01:08:00.383094  538132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 01:08:00.433483  538132 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 01:08:00.433574  538132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 01:08:00.449595  538132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 01:08:00.467599  538132 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
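The sed pipelines at 01:08:00.433 and 01:08:00.449 force every bridge/podman CNI config under /etc/cni/net.d onto the 10.244.0.0/16 pod subnet and drop IPv6 subnet/dst entries. A roughly equivalent JSON-level rewrite, sketched in Python (the sample config is invented, not taken from the log):

```python
import json

POD_CIDR = "10.244.0.0/16"

def patch_cni(conf: dict):
    """Walk a CNI config: drop IPv6 subnet/dst entries and pin IPv4
    subnets to the cluster pod CIDR -- the JSON analogue of minikube's
    sed rewrite of bridge configs."""
    def walk(node):
        if isinstance(node, dict):
            for key in ("subnet", "dst"):
                if key in node and ":" in node[key]:   # IPv6 entry: drop
                    del node[key]
                elif key == "subnet" and key in node:  # IPv4: pin to pod CIDR
                    node[key] = POD_CIDR
            for v in node.values():
                walk(v)
        elif isinstance(node, list):
            for v in node:
                walk(v)
        return node
    return walk(conf)

sample = {"ipam": {"ranges": [[{"subnet": "192.168.0.0/24"}]]}}
print(json.dumps(patch_cni(sample)))
```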
	I0223 01:08:00.467650  538132 start.go:475] detecting cgroup driver to use...
	I0223 01:08:00.467689  538132 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:08:00.467896  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:08:00.485237  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 01:08:00.495864  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 01:08:00.507263  538132 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 01:08:00.507335  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 01:08:00.517785  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:08:00.535130  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 01:08:00.547476  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:08:00.558560  538132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 01:08:00.567819  538132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
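The run of sed commands from 01:08:00.485 to 01:08:00.567 retargets /etc/containerd/config.toml in place: sandbox image, restrict_oom_score_adj, `SystemdCgroup = false` (the "cgroupfs" driver chosen above), and the runc v2 runtime. A line-oriented sketch of the SystemdCgroup toggle, assuming the stock indented TOML layout:

```python
import re

def set_systemd_cgroup(toml_text: str, enabled: bool) -> str:
    """Equivalent of:
    sed -i -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'
    -- preserve indentation, replace only the value."""
    value = "true" if enabled else "false"
    return re.sub(r"(?m)^( *)SystemdCgroup = .*$",
                  rf"\1SystemdCgroup = {value}", toml_text)

cfg = '  [plugins."io.containerd.grpc.v1.cri".containerd]\n    SystemdCgroup = true\n'
print(set_systemd_cgroup(cfg, False))
```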
	I0223 01:08:00.579669  538132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 01:08:00.588369  538132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 01:08:00.596333  538132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:08:00.685999  538132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 01:08:00.794779  538132 start.go:475] detecting cgroup driver to use...
	I0223 01:08:00.794868  538132 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:08:00.794956  538132 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 01:08:00.812242  538132 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 01:08:00.812368  538132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 01:08:00.825238  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:08:00.850172  538132 ssh_runner.go:195] Run: which cri-dockerd
	I0223 01:08:00.854443  538132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 01:08:00.865475  538132 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 01:08:00.888983  538132 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 01:08:00.965556  538132 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 01:08:01.052648  538132 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 01:08:01.052801  538132 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 01:08:01.069623  538132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:08:01.168808  538132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:08:03.659608  538132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.490761751s)
	I0223 01:08:03.659676  538132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:08:03.701921  538132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:08:03.737991  538132 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0223 01:08:03.738152  538132 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-849442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:08:03.768996  538132 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0223 01:08:03.773083  538132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:08:03.784513  538132 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:08:03.784590  538132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:08:03.808221  538132 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:08:03.808240  538132 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
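The preload check at 01:08:03.808 appears to fail because the cached images carry the old k8s.gcr.io registry prefix while this minikube expects registry.k8s.io, so the tarball is copied and extracted again below. The comparison is an exact-name match, sketched here with two image names copied from the log:

```python
def is_preloaded(image: str, cache) -> bool:
    # Exact string match: k8s.gcr.io and registry.k8s.io aliases refer
    # to the same images but do NOT match each other by name, which is
    # why the log reports the image "wasn't preloaded".
    return image in cache

preloaded = {"k8s.gcr.io/kube-apiserver:v1.16.0",
             "k8s.gcr.io/kube-proxy:v1.16.0"}
print(is_preloaded("registry.k8s.io/kube-apiserver:v1.16.0", preloaded))  # -> False
```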
	I0223 01:08:03.808276  538132 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:08:03.822460  538132 ssh_runner.go:195] Run: which lz4
	I0223 01:08:03.825553  538132 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0223 01:08:03.828356  538132 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 01:08:03.828382  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0223 01:08:05.184813  538132 docker.go:649] Took 1.359285 seconds to copy over tarball
	I0223 01:08:05.184921  538132 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 01:08:07.456376  538132 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.271363693s)
	I0223 01:08:07.456422  538132 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0223 01:08:07.523195  538132 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:08:07.532204  538132 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0223 01:08:07.550926  538132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:08:07.636674  538132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:08:08.879416  538132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.242701694s)
	I0223 01:08:08.879514  538132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:08:08.907077  538132 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:08:08.907105  538132 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:08:08.907117  538132 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0223 01:08:08.908750  538132 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:08:08.908818  538132 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:08:08.909004  538132 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0223 01:08:08.909025  538132 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:08:08.909035  538132 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0223 01:08:08.909094  538132 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:08:08.909004  538132 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:08:08.909234  538132 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:08:08.909692  538132 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:08:08.909773  538132 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:08:08.910316  538132 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:08:08.910324  538132 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:08:08.910346  538132 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:08:08.910369  538132 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0223 01:08:08.910397  538132 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:08:08.910455  538132 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0223 01:08:09.079712  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0223 01:08:09.099206  538132 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0223 01:08:09.099254  538132 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0223 01:08:09.099292  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0223 01:08:09.111688  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:08:09.113226  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0223 01:08:09.114828  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:08:09.119664  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0223 01:08:09.135830  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:08:09.138662  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:08:09.139352  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:08:09.139527  538132 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0223 01:08:09.139571  538132 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:08:09.139628  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:08:09.139707  538132 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0223 01:08:09.139731  538132 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:08:09.139763  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0223 01:08:09.189225  538132 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0223 01:08:09.189275  538132 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:08:09.189317  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:08:09.189401  538132 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0223 01:08:09.189426  538132 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:08:09.189452  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:08:09.189523  538132 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0223 01:08:09.189544  538132 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:08:09.189573  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:08:09.189635  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0223 01:08:09.192309  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0223 01:08:09.208318  538132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0223 01:08:09.228418  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0223 01:08:09.228522  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0223 01:08:09.228820  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0223 01:08:09.233208  538132 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0223 01:08:09.233250  538132 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0223 01:08:09.233289  538132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0223 01:08:09.256002  538132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0223 01:08:09.256074  538132 cache_images.go:92] LoadImages completed in 348.940587ms
	W0223 01:08:09.256162  538132 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
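	[Editor's note: the image churn above traces back to a registry-name mismatch: the v1.16.0 preload tarball ships images tagged under the legacy k8s.gcr.io prefix, while this minikube build looks for registry.k8s.io names, so every lookup misses and it falls back to per-image cache files that don't exist. A minimal sketch of the mismatched lookup, using image names copied from the log:]

```shell
# Image tags actually present in the preload tarball (copied from the log)
# use the legacy k8s.gcr.io registry prefix.
PRELOADED='k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/etcd:3.3.15-0'

# minikube checks for the image under its current registry.k8s.io name.
WANTED='registry.k8s.io/kube-apiserver:v1.16.0'

if printf '%s\n' "$PRELOADED" | grep -qxF "$WANTED"; then
  STATUS="preloaded"
else
  STATUS="wasn't preloaded"   # same image, different registry prefix
fi
echo "$STATUS"
```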
	I0223 01:08:09.256224  538132 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 01:08:09.311762  538132 cni.go:84] Creating CNI manager for ""
	I0223 01:08:09.311790  538132 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:08:09.311809  538132 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 01:08:09.311833  538132 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-849442 NodeName:kubernetes-upgrade-849442 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0223 01:08:09.312062  538132 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-849442"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-849442
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 01:08:09.312181  538132 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-849442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 01:08:09.312256  538132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 01:08:09.323895  538132 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 01:08:09.323984  538132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 01:08:09.334715  538132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0223 01:08:09.355792  538132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 01:08:09.376308  538132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0223 01:08:09.393554  538132 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 01:08:09.397166  538132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
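	[Editor's note: both /etc/hosts edits above use the same idempotent pattern: filter out any stale line for the name, append the fresh mapping, and copy a temp file back into place so readers never see a half-written hosts file. A minimal sketch of that pattern against a scratch file (the stale IP and temp paths here are illustrative):]

```shell
# Scratch hosts file standing in for /etc/hosts (illustrative only).
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.76.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Drop any old entry for the name, append the current one, then replace the
# file in a single cp so the update is effectively atomic for readers.
TMP=$(mktemp)
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"
  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
} > "$TMP"
cp "$TMP" "$HOSTS"
```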
	I0223 01:08:09.409460  538132 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442 for IP: 192.168.76.2
	I0223 01:08:09.409497  538132 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.409655  538132 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 01:08:09.409748  538132 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 01:08:09.409813  538132 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key
	I0223 01:08:09.409829  538132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.crt with IP's: []
	I0223 01:08:09.543665  538132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.crt ...
	I0223 01:08:09.543695  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.crt: {Name:mk4499f6c3785f3c5cf7ee53f207e84344213ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.543855  538132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key ...
	I0223 01:08:09.543878  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key: {Name:mkf3396550a3de4bb814cf1c697936db2aae1a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.543976  538132 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key.31bdca25
	I0223 01:08:09.544013  538132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 01:08:09.625286  538132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt.31bdca25 ...
	I0223 01:08:09.625328  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt.31bdca25: {Name:mk56c81bb298871fa2be40b87d04d9165d31ddbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.625526  538132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key.31bdca25 ...
	I0223 01:08:09.625548  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key.31bdca25: {Name:mk1065ad976e5c3c907b5eab313ca2cc6f2f14ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.625660  538132 certs.go:337] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt
	I0223 01:08:09.625756  538132 certs.go:341] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key
	I0223 01:08:09.625843  538132 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.key
	I0223 01:08:09.625864  538132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.crt with IP's: []
	I0223 01:08:09.775158  538132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.crt ...
	I0223 01:08:09.775196  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.crt: {Name:mkba80da2ed88357e20f2d2d1d272526ca72c968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.775370  538132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.key ...
	I0223 01:08:09.775388  538132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.key: {Name:mk64cfa5c628cd73447827578f0d60fb5bd31849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:08:09.775585  538132 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 01:08:09.775623  538132 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 01:08:09.775635  538132 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 01:08:09.775656  538132 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 01:08:09.775677  538132 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 01:08:09.775712  538132 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 01:08:09.775751  538132 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:08:09.776379  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 01:08:09.804714  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 01:08:09.834201  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 01:08:09.868542  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 01:08:09.900386  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 01:08:09.934488  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 01:08:09.962834  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 01:08:09.993646  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 01:08:10.028185  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 01:08:10.063598  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 01:08:10.098031  538132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 01:08:10.130164  538132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 01:08:10.154620  538132 ssh_runner.go:195] Run: openssl version
	I0223 01:08:10.162577  538132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 01:08:10.176257  538132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:08:10.180962  538132 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:08:10.181025  538132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:08:10.189217  538132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 01:08:10.202172  538132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 01:08:10.226774  538132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 01:08:10.238550  538132 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 01:08:10.238623  538132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 01:08:10.256929  538132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
	I0223 01:08:10.267033  538132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 01:08:10.277749  538132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 01:08:10.283138  538132 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 01:08:10.283214  538132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 01:08:10.291866  538132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
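	[Editor's note: the openssl/ln steps above follow OpenSSL's hashed CA directory convention: a CA certificate is looked up via a symlink named `<subject-hash>.0`, where the hash comes from `openssl x509 -hash`. A minimal sketch with a throwaway self-signed certificate in a scratch directory (nothing here touches minikube's paths):]

```shell
# Throwaway self-signed cert to play the role of the CA (illustrative only).
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=scratch-ca' \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null

# OpenSSL locates CAs in a directory via <subject-hash>.0 symlinks; -fs makes
# the link creation idempotent, matching the test-and-link pattern in the log.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
```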
	I0223 01:08:10.305898  538132 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 01:08:10.311317  538132 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0223 01:08:10.311368  538132 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-849442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:08:10.311527  538132 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:08:10.335094  538132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 01:08:10.346416  538132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:08:10.356967  538132 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:08:10.357038  538132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:08:10.369012  538132 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
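	[Editor's note: the failed `ls` above is how minikube distinguishes a fresh node from one carrying leftover control-plane state: a non-zero exit from probing the kubeconfig files means there is nothing stale to clean up, so it proceeds straight to `kubeadm init`. A minimal sketch of that probe against an empty scratch directory (file names mirror the log; the directory is illustrative):]

```shell
# Empty scratch directory standing in for /etc/kubernetes on a fresh node.
KUBEDIR=$(mktemp -d)

# ls exits non-zero when any probed file is missing, so this branch skips
# stale-config cleanup exactly as the log does.
if ls "$KUBEDIR/admin.conf" "$KUBEDIR/kubelet.conf" \
      "$KUBEDIR/controller-manager.conf" "$KUBEDIR/scheduler.conf" >/dev/null 2>&1; then
  ACTION="clean up stale configs"
else
  ACTION="skip stale config cleanup"
fi
echo "$ACTION"
```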
	I0223 01:08:10.369070  538132 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:08:10.483387  538132 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:08:10.483652  538132 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:08:10.784100  538132 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:08:10.784185  538132 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:08:10.784244  538132 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:08:10.784283  538132 kubeadm.go:322] OS: Linux
	I0223 01:08:10.784352  538132 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:08:10.784414  538132 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:08:10.784473  538132 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:08:10.784540  538132 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:08:10.784601  538132 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:08:10.784656  538132 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:08:10.874598  538132 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:08:10.874751  538132 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:08:10.874926  538132 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:08:11.180076  538132 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:08:11.181292  538132 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:08:11.192895  538132 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:08:11.325231  538132 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:08:11.327908  538132 out.go:204]   - Generating certificates and keys ...
	I0223 01:08:11.328013  538132 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:08:11.328113  538132 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:08:11.493645  538132 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 01:08:11.954581  538132 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 01:08:12.231962  538132 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 01:08:12.533282  538132 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 01:08:12.769194  538132 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 01:08:12.769402  538132 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-849442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0223 01:08:13.061195  538132 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 01:08:13.061343  538132 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-849442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0223 01:08:13.123858  538132 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 01:08:13.472966  538132 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 01:08:13.583231  538132 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 01:08:13.583378  538132 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:08:13.786991  538132 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:08:14.069091  538132 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:08:14.334015  538132 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:08:14.567794  538132 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:08:14.569113  538132 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:08:14.586615  538132 out.go:204]   - Booting up control plane ...
	I0223 01:08:14.586760  538132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:08:14.586906  538132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:08:14.587953  538132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:08:14.612863  538132 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:08:14.615946  538132 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:08:54.616264  538132 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:12:14.617070  538132 kubeadm.go:322] 
	I0223 01:12:14.617168  538132 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:12:14.617247  538132 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:12:14.617259  538132 kubeadm.go:322] 
	I0223 01:12:14.617301  538132 kubeadm.go:322] This error is likely caused by:
	I0223 01:12:14.617375  538132 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:12:14.617574  538132 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:12:14.617605  538132 kubeadm.go:322] 
	I0223 01:12:14.617744  538132 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:12:14.617789  538132 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:12:14.617834  538132 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:12:14.617843  538132 kubeadm.go:322] 
	I0223 01:12:14.617989  538132 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:12:14.618107  538132 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:12:14.618193  538132 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:12:14.618255  538132 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:12:14.618352  538132 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:12:14.618406  538132 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:12:14.621052  538132 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:12:14.621229  538132 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:12:14.621458  538132 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:12:14.621587  538132 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:12:14.621697  538132 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:12:14.621785  538132 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 01:12:14.621977  538132 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-849442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-849442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 01:12:14.622040  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:12:15.911466  538132 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.289358218s)
	I0223 01:12:15.911554  538132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:12:15.925517  538132 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:12:15.925588  538132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:12:15.935869  538132 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:12:15.935917  538132 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:12:15.992010  538132 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:12:15.992117  538132 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:12:16.184943  538132 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:12:16.185060  538132 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:12:16.185155  538132 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:12:16.185203  538132 kubeadm.go:322] OS: Linux
	I0223 01:12:16.185267  538132 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:12:16.185336  538132 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:12:16.185401  538132 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:12:16.185466  538132 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:12:16.185514  538132 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:12:16.185551  538132 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:12:16.266683  538132 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:12:16.266794  538132 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:12:16.266912  538132 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:12:16.477008  538132 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:12:16.479781  538132 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:12:16.490606  538132 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:12:16.584704  538132 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:12:16.675355  538132 out.go:204]   - Generating certificates and keys ...
	I0223 01:12:16.675485  538132 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:12:16.675606  538132 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:12:16.675735  538132 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:12:16.675856  538132 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:12:16.675960  538132 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:12:16.676045  538132 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:12:16.676142  538132 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:12:16.676250  538132 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:12:16.676385  538132 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:12:16.676500  538132 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:12:16.676560  538132 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:12:16.676649  538132 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:12:16.869235  538132 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:12:17.099075  538132 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:12:17.284479  538132 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:12:17.510515  538132 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:12:17.510620  538132 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:12:17.512505  538132 out.go:204]   - Booting up control plane ...
	I0223 01:12:17.512626  538132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:12:17.516483  538132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:12:17.517632  538132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:12:17.518674  538132 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:12:17.521039  538132 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:12:57.521661  538132 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:16:17.522313  538132 kubeadm.go:322] 
	I0223 01:16:17.522416  538132 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:16:17.522468  538132 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:16:17.522504  538132 kubeadm.go:322] 
	I0223 01:16:17.522547  538132 kubeadm.go:322] This error is likely caused by:
	I0223 01:16:17.522585  538132 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:16:17.522675  538132 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:16:17.522684  538132 kubeadm.go:322] 
	I0223 01:16:17.522766  538132 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:16:17.522799  538132 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:16:17.522827  538132 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:16:17.522833  538132 kubeadm.go:322] 
	I0223 01:16:17.523006  538132 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:16:17.523144  538132 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:16:17.523263  538132 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:16:17.523332  538132 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:16:17.523418  538132 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:16:17.523456  538132 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:16:17.525593  538132 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:16:17.525782  538132 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:16:17.526201  538132 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:16:17.526365  538132 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:16:17.526466  538132 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:16:17.526594  538132 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 01:16:17.526703  538132 kubeadm.go:406] StartCluster complete in 8m7.215320803s
	I0223 01:16:17.526790  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:16:17.547377  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.547410  538132 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:16:17.547469  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:16:17.564854  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.564883  538132 logs.go:278] No container was found matching "etcd"
	I0223 01:16:17.564946  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:16:17.585519  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.585557  538132 logs.go:278] No container was found matching "coredns"
	I0223 01:16:17.585615  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:16:17.606719  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.606758  538132 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:16:17.606816  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:16:17.624128  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.624161  538132 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:16:17.624227  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:16:17.641729  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.641762  538132 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:16:17.641844  538132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:16:17.662254  538132 logs.go:276] 0 containers: []
	W0223 01:16:17.662278  538132 logs.go:278] No container was found matching "kindnet"
	I0223 01:16:17.662288  538132 logs.go:123] Gathering logs for kubelet ...
	I0223 01:16:17.662301  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:16:17.684789  538132 logs.go:138] Found kubelet problem: Feb 23 01:15:54 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:15:54.597351    5682 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-849442_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:16:17.700066  538132 logs.go:138] Found kubelet problem: Feb 23 01:16:03 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:03.599312    5682 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-849442_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:16:17.700474  538132 logs.go:138] Found kubelet problem: Feb 23 01:16:03 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:03.600408    5682 pod_workers.go:191] Error syncing pod ee415ee9af0931b8e7b068297bfe46fe ("etcd-kubernetes-upgrade-849442_kube-system(ee415ee9af0931b8e7b068297bfe46fe)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:16:17.705789  538132 logs.go:138] Found kubelet problem: Feb 23 01:16:06 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:06.598144    5682 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-849442_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:16:17.708094  538132 logs.go:138] Found kubelet problem: Feb 23 01:16:07 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:07.597973    5682 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-849442_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:16:17.721586  538132 logs.go:138] Found kubelet problem: Feb 23 01:16:15 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:15.597317    5682 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-849442_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:16:17.725194  538132 logs.go:138] Found kubelet problem: Feb 23 01:16:17 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:17.601025    5682 pod_workers.go:191] Error syncing pod ee415ee9af0931b8e7b068297bfe46fe ("etcd-kubernetes-upgrade-849442_kube-system(ee415ee9af0931b8e7b068297bfe46fe)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:16:17.725213  538132 logs.go:123] Gathering logs for dmesg ...
	I0223 01:16:17.725229  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:16:17.753648  538132 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:16:17.753688  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:16:17.812635  538132 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:16:17.812662  538132 logs.go:123] Gathering logs for Docker ...
	I0223 01:16:17.812677  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:16:17.831705  538132 logs.go:123] Gathering logs for container status ...
	I0223 01:16:17.831738  538132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0223 01:16:17.868310  538132 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
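The kubeadm output above lists the kubelet and container-runtime checks to run by hand. A minimal sketch that only prints those commands rather than executing them (they need a live node; the Docker CLI form matches this job's `--container-runtime=docker`):

```shell
#!/bin/sh
# Print, without executing, the triage steps kubeadm suggests above.
# Each needs a live node with systemd and the Docker CLI available.
for cmd in \
  "systemctl status kubelet" \
  "journalctl -xeu kubelet" \
  "docker ps -a | grep kube | grep -v pause"; do
  echo "run: $cmd"
done
```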
	W0223 01:16:17.868347  538132 out.go:239] * 
	W0223 01:16:17.868397  538132 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:16:17.868418  538132 out.go:239] * 
	W0223 01:16:17.869272  538132 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 01:16:17.934756  538132 out.go:177] X Problems detected in kubelet:
	I0223 01:16:17.936575  538132 out.go:177]   Feb 23 01:15:54 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:15:54.597351    5682 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-849442_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:16:17.938389  538132 out.go:177]   Feb 23 01:16:03 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:03.599312    5682 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-849442_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:16:17.940354  538132 out.go:177]   Feb 23 01:16:03 kubernetes-upgrade-849442 kubelet[5682]: E0223 01:16:03.600408    5682 pod_workers.go:191] Error syncing pod ee415ee9af0931b8e7b068297bfe46fe ("etcd-kubernetes-upgrade-849442_kube-system(ee415ee9af0931b8e7b068297bfe46fe)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:16:17.944963  538132 out.go:177] 
	W0223 01:16:18.023273  538132 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:16:18.023346  538132 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 01:16:18.023394  538132 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
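One way to act on the cgroup-driver suggestion above, sketched under the assumption that the host's Docker daemon reads `/etc/docker/daemon.json` (the file path, and the follow-up `systemctl restart docker`, depend on the host and are left to the operator):

```shell
#!/bin/sh
# Emit a daemon.json fragment that switches Docker to the systemd cgroup
# driver, addressing the IsDockerSystemdCheck warning in the kubeadm
# preflight output. This only prints the fragment; installing it at
# /etc/docker/daemon.json and restarting Docker is not done here.
cat <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```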
	I0223 01:16:18.093682  538132 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-849442
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-849442: (1.512615295s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-849442 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-849442 status --format={{.Host}}: exit status 7 (89.632574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
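The harness tolerates this non-zero exit because `minikube status` signals cluster state through its exit code; here exit status 7 accompanies the `Stopped` output above. A sketch of that interpretation (the exit-code meaning is inferred from this run, not from the test source, and the variable is a stand-in for actually invoking `status`):

```shell
#!/bin/sh
# Illustrative only: mirror how the test treats a status exit code of 7
# ("Stopped" host, profile still exists) as acceptable after a stop.
status_exit=7   # stand-in for: out/minikube-linux-amd64 status -p <profile>; echo $?
if [ "$status_exit" -eq 7 ]; then
  echo "host stopped (may be ok)"
else
  echo "unexpected status exit: $status_exit"
fi
```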
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.807982378s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-849442 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
E0223 01:20:48.800207  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (84.167462ms)

-- stdout --
	* [kubernetes-upgrade-849442] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-849442
	    minikube start -p kubernetes-upgrade-849442 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8494422 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-849442 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0223 01:20:51.360434  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:56.480727  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:57.402991  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:21:02.377798  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:21:05.270816  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:21:06.721317  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-849442 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.334190834s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-23 01:21:16.148129757 +0000 UTC m=+2974.332781334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-849442
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-849442:

-- stdout --
	[
	    {
	        "Id": "e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d",
	        "Created": "2024-02-23T01:07:53.85072547Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 687458,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:16:20.365137283Z",
	            "FinishedAt": "2024-02-23T01:16:18.970156292Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d/hosts",
	        "LogPath": "/var/lib/docker/containers/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d-json.log",
	        "Name": "/kubernetes-upgrade-849442",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-849442:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-849442",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df3da4d019719581d9ca6398048c1bc976e3e5d2ba38a81de77dc71d9e883369-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df3da4d019719581d9ca6398048c1bc976e3e5d2ba38a81de77dc71d9e883369/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df3da4d019719581d9ca6398048c1bc976e3e5d2ba38a81de77dc71d9e883369/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df3da4d019719581d9ca6398048c1bc976e3e5d2ba38a81de77dc71d9e883369/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-849442",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-849442/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-849442",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-849442",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-849442",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62e1f9c8c98763924593ff90a2077601137538ac6b89afecdc947f38512aea79",
	            "SandboxKey": "/var/run/docker/netns/62e1f9c8c987",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33378"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-849442": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e8c08b5ebc65",
	                        "kubernetes-upgrade-849442"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fddd82e2b02301480844b3742f940f1b88faec4342956865d3fa0eb434202ef7",
	                    "EndpointID": "6ed05cbcc8297eee90e9f59863cb9c16d7fdd44c21310a34ca145540eaac6bd3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-849442",
	                        "e8c08b5ebc65"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
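As an aid to triaging reports like this one, the host-port bindings for the cluster node are buried in the `NetworkSettings.Ports` section of the inspect JSON above. A minimal sketch of extracting them with Python's standard library — the embedded JSON is a hand-trimmed excerpt of the `docker inspect kubernetes-upgrade-849442` output shown above, not a live query:

```python
import json

# Hand-trimmed excerpt of the `docker inspect kubernetes-upgrade-849442`
# output above; only Name and two of the five port bindings are kept.
inspect_output = """
[
  {
    "Name": "/kubernetes-upgrade-849442",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33378"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33375"}]
      }
    }
  }
]
"""

def host_ports(inspect_json: str) -> dict:
    """Map each container port (e.g. '8443/tcp') to its published host address."""
    container = json.loads(inspect_json)[0]  # docker inspect returns a JSON array
    ports = container["NetworkSettings"]["Ports"]
    return {
        port: f"{binding['HostIp']}:{binding['HostPort']}"
        for port, bindings in ports.items()
        for binding in (bindings or [])  # unpublished ports bind to null
    }

print(host_ports(inspect_output))
# → {'22/tcp': '127.0.0.1:33378', '8443/tcp': '127.0.0.1:33375'}
```

Port 8443 here is the Kubernetes API server endpoint that the test harness's `kubectl` and `minikube status` calls go through.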
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-849442 -n kubernetes-upgrade-849442
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-849442 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-600346 sudo cat                             | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo                                 | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | cri-dockerd --version                                  |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo                                 | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | systemctl status containerd                            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo                                 | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | systemctl cat containerd                               |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-157588             | no-preload-157588         | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo cat                             | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | /lib/systemd/system/containerd.service                 |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo cat                             | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo                                 | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | containerd config dump                                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-157588                                   | no-preload-157588         | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo                                 | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC |                     |
	|         | systemctl status crio --all                            |                           |         |         |                     |                     |
	|         | --full --no-pager                                      |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo                                 | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | systemctl cat crio --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo find                            | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p kubenet-600346 sudo crio                            | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p kubenet-600346                                      | kubenet-600346            | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	| start   | -p embed-certs-039066                                  | embed-certs-039066        | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-849442                           | kubernetes-upgrade-849442 | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	| start   | -p kubernetes-upgrade-849442                           | kubernetes-upgrade-849442 | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:20 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-157588                  | no-preload-157588         | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC | 23 Feb 24 01:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-157588                                   | no-preload-157588         | jenkins | v1.32.0 | 23 Feb 24 01:16 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-039066            | embed-certs-039066        | jenkins | v1.32.0 | 23 Feb 24 01:17 UTC | 23 Feb 24 01:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-039066                                  | embed-certs-039066        | jenkins | v1.32.0 | 23 Feb 24 01:17 UTC | 23 Feb 24 01:17 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-039066                 | embed-certs-039066        | jenkins | v1.32.0 | 23 Feb 24 01:17 UTC | 23 Feb 24 01:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-039066                                  | embed-certs-039066        | jenkins | v1.32.0 | 23 Feb 24 01:17 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-849442                           | kubernetes-upgrade-849442 | jenkins | v1.32.0 | 23 Feb 24 01:20 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-849442                           | kubernetes-upgrade-849442 | jenkins | v1.32.0 | 23 Feb 24 01:20 UTC | 23 Feb 24 01:21 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=docker                             |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/23 01:20:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 01:20:48.866989  719706 out.go:291] Setting OutFile to fd 1 ...
	I0223 01:20:48.867285  719706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:20:48.867300  719706 out.go:304] Setting ErrFile to fd 2...
	I0223 01:20:48.867306  719706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:20:48.867491  719706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 01:20:48.868173  719706 out.go:298] Setting JSON to false
	I0223 01:20:48.869815  719706 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7398,"bootTime":1708643851,"procs":391,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 01:20:48.869962  719706 start.go:139] virtualization: kvm guest
	I0223 01:20:48.872132  719706 out.go:177] * [kubernetes-upgrade-849442] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 01:20:48.873421  719706 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 01:20:48.873483  719706 notify.go:220] Checking for updates...
	I0223 01:20:48.874878  719706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 01:20:48.876630  719706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:20:48.877996  719706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 01:20:48.879319  719706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 01:20:48.880533  719706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 01:20:48.882290  719706 config.go:182] Loaded profile config "kubernetes-upgrade-849442": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0223 01:20:48.882776  719706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 01:20:48.910420  719706 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 01:20:48.910635  719706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:20:48.963253  719706 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:97 SystemTime:2024-02-23 01:20:48.954154244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:20:48.963358  719706 docker.go:295] overlay module found
	I0223 01:20:48.966189  719706 out.go:177] * Using the docker driver based on existing profile
	I0223 01:20:48.967661  719706 start.go:299] selected driver: docker
	I0223 01:20:48.967688  719706 start.go:903] validating driver "docker" against &{Name:kubernetes-upgrade-849442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:20:48.967788  719706 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 01:20:48.968639  719706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:20:49.022363  719706 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:97 SystemTime:2024-02-23 01:20:49.01257277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:20:49.022841  719706 cni.go:84] Creating CNI manager for ""
	I0223 01:20:49.022878  719706 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 01:20:49.022895  719706 start_flags.go:323] config:
	{Name:kubernetes-upgrade-849442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:20:49.024967  719706 out.go:177] * Starting control plane node kubernetes-upgrade-849442 in cluster kubernetes-upgrade-849442
	I0223 01:20:49.026292  719706 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 01:20:49.027925  719706 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 01:20:49.029267  719706 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0223 01:20:49.029321  719706 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0223 01:20:49.029340  719706 cache.go:56] Caching tarball of preloaded images
	I0223 01:20:49.029347  719706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 01:20:49.029444  719706 preload.go:174] Found /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 01:20:49.029458  719706 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0223 01:20:49.029612  719706 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/config.json ...
	I0223 01:20:49.047071  719706 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 01:20:49.047094  719706 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 01:20:49.047106  719706 cache.go:194] Successfully downloaded all kic artifacts
	I0223 01:20:49.047155  719706 start.go:365] acquiring machines lock for kubernetes-upgrade-849442: {Name:mkd66510a1a6624e119fa7f76664596bf0f0ccdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 01:20:49.047243  719706 start.go:369] acquired machines lock for "kubernetes-upgrade-849442" in 50.084µs
	I0223 01:20:49.047267  719706 start.go:96] Skipping create...Using existing machine configuration
	I0223 01:20:49.047275  719706 fix.go:54] fixHost starting: 
	I0223 01:20:49.047594  719706 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:20:49.062986  719706 fix.go:102] recreateIfNeeded on kubernetes-upgrade-849442: state=Running err=<nil>
	W0223 01:20:49.063015  719706 fix.go:128] unexpected machine state, will restart: <nil>
	I0223 01:20:49.064905  719706 out.go:177] * Updating the running docker "kubernetes-upgrade-849442" container ...
	I0223 01:20:44.870378  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:47.370990  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:48.133631  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:50.133764  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:49.066207  719706 machine.go:88] provisioning docker machine ...
	I0223 01:20:49.066239  719706 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-849442"
	I0223 01:20:49.066297  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:49.081910  719706 main.go:141] libmachine: Using SSH client type: native
	I0223 01:20:49.082189  719706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I0223 01:20:49.082210  719706 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-849442 && echo "kubernetes-upgrade-849442" | sudo tee /etc/hostname
	I0223 01:20:49.225089  719706 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-849442
	
	I0223 01:20:49.225182  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:49.241957  719706 main.go:141] libmachine: Using SSH client type: native
	I0223 01:20:49.242220  719706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I0223 01:20:49.242243  719706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-849442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-849442/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-849442' | sudo tee -a /etc/hosts; 
				fi
			fi
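	The /etc/hosts rewrite that minikube runs over SSH above (replace an existing 127.0.1.1 entry, else append one) can be sketched standalone against a scratch file; the temp file and the stale "old-name" entry here are illustrative, not from the log:

	```shell
	# Minimal sketch of the hostname fixup, applied to a scratch copy instead of /etc/hosts.
	HOSTS=$(mktemp)
	NAME=kubernetes-upgrade-849442
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	# Only touch the file if the desired name is not already present.
	if ! grep -q "\s$NAME" "$HOSTS"; then
	  if grep -q '^127.0.1.1\s' "$HOSTS"; then
	    # Rewrite the existing 127.0.1.1 line in place.
	    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
	  else
	    # No 127.0.1.1 line yet: append one.
	    echo "127.0.1.1 $NAME" >> "$HOSTS"
	  fi
	fi
	cat "$HOSTS"
	```

	Running it twice is a no-op the second time, which is why the real command can be re-executed safely on an already-provisioned machine.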
	I0223 01:20:49.374494  719706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:20:49.374523  719706 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 01:20:49.374568  719706 ubuntu.go:177] setting up certificates
	I0223 01:20:49.374581  719706 provision.go:83] configureAuth start
	I0223 01:20:49.374645  719706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-849442
	I0223 01:20:49.393695  719706 provision.go:138] copyHostCerts
	I0223 01:20:49.393792  719706 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 01:20:49.393814  719706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 01:20:49.393896  719706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 01:20:49.394167  719706 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 01:20:49.394183  719706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 01:20:49.394225  719706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 01:20:49.394297  719706 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 01:20:49.394306  719706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 01:20:49.394334  719706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 01:20:49.394386  719706 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-849442 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-849442]
	I0223 01:20:49.605118  719706 provision.go:172] copyRemoteCerts
	I0223 01:20:49.605182  719706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 01:20:49.605223  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:49.623531  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:20:49.723323  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 01:20:49.745628  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 01:20:49.768716  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0223 01:20:49.789902  719706 provision.go:86] duration metric: configureAuth took 415.302289ms
	I0223 01:20:49.789934  719706 ubuntu.go:193] setting minikube options for container-runtime
	I0223 01:20:49.790136  719706 config.go:182] Loaded profile config "kubernetes-upgrade-849442": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0223 01:20:49.790199  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:49.808010  719706 main.go:141] libmachine: Using SSH client type: native
	I0223 01:20:49.808241  719706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I0223 01:20:49.808267  719706 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 01:20:49.938560  719706 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 01:20:49.938587  719706 ubuntu.go:71] root file system type: overlay
	I0223 01:20:49.938743  719706 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 01:20:49.938815  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:49.957977  719706 main.go:141] libmachine: Using SSH client type: native
	I0223 01:20:49.958230  719706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I0223 01:20:49.958334  719706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 01:20:50.105509  719706 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 01:20:50.105596  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:50.123238  719706 main.go:141] libmachine: Using SSH client type: native
	I0223 01:20:50.123419  719706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I0223 01:20:50.123436  719706 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 01:20:50.263187  719706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:20:50.263214  719706 machine.go:91] provisioned docker machine in 1.196990369s
	I0223 01:20:50.263228  719706 start.go:300] post-start starting for "kubernetes-upgrade-849442" (driver="docker")
	I0223 01:20:50.263245  719706 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 01:20:50.263322  719706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 01:20:50.263370  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:50.280117  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:20:50.378940  719706 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 01:20:50.381913  719706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 01:20:50.381952  719706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 01:20:50.381960  719706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 01:20:50.381969  719706 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 01:20:50.381982  719706 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 01:20:50.382026  719706 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 01:20:50.382145  719706 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 01:20:50.382239  719706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 01:20:50.390228  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:20:50.414412  719706 start.go:303] post-start completed in 151.168686ms
	I0223 01:20:50.414486  719706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 01:20:50.414532  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:50.431108  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:20:50.523553  719706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 01:20:50.528087  719706 fix.go:56] fixHost completed within 1.480803699s
	I0223 01:20:50.528118  719706 start.go:83] releasing machines lock for "kubernetes-upgrade-849442", held for 1.480860363s
	I0223 01:20:50.528184  719706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-849442
	I0223 01:20:50.546556  719706 ssh_runner.go:195] Run: cat /version.json
	I0223 01:20:50.546619  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:50.546621  719706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 01:20:50.546694  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:20:50.564764  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:20:50.565989  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:20:50.753431  719706 ssh_runner.go:195] Run: systemctl --version
	I0223 01:20:50.758466  719706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 01:20:50.762538  719706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 01:20:50.762609  719706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 01:20:50.771223  719706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 01:20:50.778981  719706 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 01:20:50.779005  719706 start.go:475] detecting cgroup driver to use...
	I0223 01:20:50.779032  719706 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:20:50.779139  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:20:50.794288  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 01:20:50.803480  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 01:20:50.813058  719706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 01:20:50.813131  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 01:20:50.822038  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:20:50.830948  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 01:20:50.839636  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:20:50.848352  719706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 01:20:50.856510  719706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
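	The sed rewrites in the lines above can be tried in isolation. A minimal sketch, assuming GNU sed and a throwaway config.toml (the file contents here are illustrative, not minikube's actual containerd config):

```shell
# Force SystemdCgroup = false, the same substitution minikube runs above to
# pin containerd to the "cgroupfs" driver detected on the host.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep SystemdCgroup "$cfg")
rm -f "$cfg"
```

	The `( *)` capture preserves the original indentation, so the edit works at any nesting depth in the TOML file.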
	I0223 01:20:50.865967  719706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 01:20:50.874783  719706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 01:20:50.882817  719706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:20:50.969491  719706 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 01:20:49.869965  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:51.870200  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:54.370845  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:52.133892  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:54.134988  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:56.634721  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:56.870270  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:59.369954  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:20:59.134123  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:01.134482  688193 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:01.278876  719706 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.309321129s)
	I0223 01:21:01.278911  719706 start.go:475] detecting cgroup driver to use...
	I0223 01:21:01.278972  719706 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:21:01.279017  719706 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 01:21:01.291390  719706 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 01:21:01.291464  719706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 01:21:01.302435  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:21:01.320243  719706 ssh_runner.go:195] Run: which cri-dockerd
	I0223 01:21:01.323446  719706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 01:21:01.331951  719706 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 01:21:01.373346  719706 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 01:21:01.491545  719706 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 01:21:01.598138  719706 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 01:21:01.598288  719706 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 01:21:01.616882  719706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:21:01.710380  719706 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:21:01.970676  719706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0223 01:21:01.981194  719706 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0223 01:21:01.996243  719706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0223 01:21:02.006661  719706 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 01:21:02.090314  719706 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 01:21:02.170923  719706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:21:02.252882  719706 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 01:21:02.265744  719706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0223 01:21:02.278498  719706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:21:02.397107  719706 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0223 01:21:02.470503  719706 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 01:21:02.470607  719706 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 01:21:02.474596  719706 start.go:543] Will wait 60s for crictl version
	I0223 01:21:02.474655  719706 ssh_runner.go:195] Run: which crictl
	I0223 01:21:02.477774  719706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 01:21:02.526242  719706 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0223 01:21:02.526298  719706 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:21:02.554485  719706 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:21:02.580774  719706 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0223 01:21:02.580852  719706 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-849442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:21:02.597176  719706 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0223 01:21:02.601115  719706 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0223 01:21:02.601180  719706 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:21:02.622565  719706 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:21:02.622587  719706 docker.go:615] Images already preloaded, skipping extraction
	I0223 01:21:02.622661  719706 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:21:02.642966  719706 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:21:02.642999  719706 cache_images.go:84] Images are preloaded, skipping loading
	I0223 01:21:02.643064  719706 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 01:21:02.712315  719706 cni.go:84] Creating CNI manager for ""
	I0223 01:21:02.712363  719706 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 01:21:02.712386  719706 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 01:21:02.712407  719706 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-849442 NodeName:kubernetes-upgrade-849442 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0223 01:21:02.712629  719706 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-849442"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 01:21:02.712767  719706 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-849442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 01:21:02.712845  719706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0223 01:21:02.724219  719706 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 01:21:02.724293  719706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 01:21:02.733785  719706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0223 01:21:02.789923  719706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0223 01:21:02.876579  719706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0223 01:21:02.901532  719706 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 01:21:02.906508  719706 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442 for IP: 192.168.76.2
	I0223 01:21:02.906552  719706 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:21:02.906744  719706 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 01:21:02.906817  719706 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 01:21:02.906939  719706 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key
	I0223 01:21:02.907016  719706 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key.31bdca25
	I0223 01:21:02.907094  719706 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.key
	I0223 01:21:02.907252  719706 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 01:21:02.907295  719706 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 01:21:02.907305  719706 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 01:21:02.907342  719706 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 01:21:02.907373  719706 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 01:21:02.907401  719706 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 01:21:02.907454  719706 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:21:02.908447  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 01:21:02.995862  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 01:21:03.026170  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 01:21:03.100673  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 01:21:03.200673  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 01:21:03.301454  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 01:21:03.402972  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 01:21:03.488260  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 01:21:03.570674  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 01:21:03.599454  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 01:21:03.679459  719706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 01:21:03.773197  719706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 01:21:03.796412  719706 ssh_runner.go:195] Run: openssl version
	I0223 01:21:03.803590  719706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 01:21:01.870410  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:03.870908  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:03.128747  688193 pod_ready.go:81] duration metric: took 4m0.00041154s waiting for pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace to be "Ready" ...
	E0223 01:21:03.128790  688193 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-rbf22" in "kube-system" namespace to be "Ready" (will not retry!)
	I0223 01:21:03.128813  688193 pod_ready.go:38] duration metric: took 4m14.037782785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:21:03.128855  688193 kubeadm.go:640] restartCluster took 4m31.047942431s
	W0223 01:21:03.128932  688193 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0223 01:21:03.128978  688193 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0223 01:21:03.872816  719706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 01:21:03.876600  719706 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 01:21:03.876685  719706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 01:21:03.890273  719706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 01:21:03.902181  719706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 01:21:03.913434  719706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:21:03.916904  719706 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:21:03.916953  719706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:21:03.977402  719706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 01:21:03.989195  719706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 01:21:04.002312  719706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 01:21:04.006434  719706 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 01:21:04.006490  719706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 01:21:04.017509  719706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
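	The `ln -fs` calls above implement OpenSSL's hashed-lookup convention: each CA file gets a `<subject-hash>.0` symlink so certificate verification can locate it by hash. A sketch with a throwaway self-signed cert and a temp directory standing in for /etc/ssl/certs:

```shell
certdir=$(mktemp -d)
# Throwaway CA cert purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example-ca" \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" 2>/dev/null
# Same 8-hex-digit subject hash the log shows (e.g. b5213941 for minikubeCA).
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
linked=$(readlink "$certdir/$hash.0")
rm -rf "$certdir"
```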
	I0223 01:21:04.080725  719706 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 01:21:04.084469  719706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0223 01:21:04.093427  719706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0223 01:21:04.101279  719706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0223 01:21:04.108538  719706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0223 01:21:04.170531  719706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0223 01:21:04.179309  719706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
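	The `-checkend 86400` probes above ask whether each certificate will still be valid 24 hours from now; exit status 0 means yes. A minimal sketch with a throwaway cert:

```shell
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=probe" \
  -keyout "$tmp/probe.key" -out "$tmp/probe.crt" 2>/dev/null
# Exit 0: the cert will NOT expire within the next 86400 seconds.
if openssl x509 -noout -in "$tmp/probe.crt" -checkend 86400 > /dev/null; then
  checkend_ok=yes
else
  checkend_ok=no
fi
rm -rf "$tmp"
```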
	I0223 01:21:04.185893  719706 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-849442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-849442 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:21:04.186042  719706 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:21:04.210843  719706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 01:21:04.273633  719706 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0223 01:21:04.273668  719706 kubeadm.go:636] restartCluster start
	I0223 01:21:04.273737  719706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 01:21:04.282635  719706 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:21:04.283522  719706 kubeconfig.go:92] found "kubernetes-upgrade-849442" server: "https://192.168.76.2:8443"
	I0223 01:21:04.284535  719706 kapi.go:59] client config for kubernetes-upgrade-849442: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key", CAFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ab80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 01:21:04.285132  719706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 01:21:04.293806  719706 api_server.go:166] Checking apiserver status ...
	I0223 01:21:04.293889  719706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:21:04.305472  719706 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14630/cgroup
	I0223 01:21:04.314831  719706 api_server.go:182] apiserver freezer: "10:freezer:/docker/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d/kubepods/burstable/poddf7c953fa190f269f0c67ebc988d3399/95f7e9cfbf4ec304aa296a5a5cd827796c2a342307440f4c1e7b56b1b6ef0569"
	I0223 01:21:04.314906  719706 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e8c08b5ebc65fa92db94e1d88f8fb0749956fed0ca8e4684ec175ad53c42436d/kubepods/burstable/poddf7c953fa190f269f0c67ebc988d3399/95f7e9cfbf4ec304aa296a5a5cd827796c2a342307440f4c1e7b56b1b6ef0569/freezer.state
	I0223 01:21:04.374557  719706 api_server.go:204] freezer state: "THAWED"
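	The freezer lookup above greps the apiserver's /proc/&lt;pid&gt;/cgroup for the freezer controller line (cgroup v1 layout), then reads freezer.state under /sys/fs/cgroup/freezer at that path. A sketch over sample /proc data, since no live apiserver pid is available here (the container ids are made up):

```shell
# Sample of what /proc/<pid>/cgroup looks like under cgroup v1.
cgroup_sample='12:cpuset:/docker/abc123/kubepods/burstable/pod1/ctr1
10:freezer:/docker/abc123/kubepods/burstable/pod1/ctr1
3:cpu:/docker/abc123'
# Field 3 of the "freezer:" line is the path under /sys/fs/cgroup/freezer.
freezer_path=$(printf '%s\n' "$cgroup_sample" \
  | grep -E '^[0-9]+:freezer:' | cut -d: -f3)
```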
	I0223 01:21:04.374593  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:06.007901  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 01:21:06.007949  719706 retry.go:31] will retry after 224.50814ms: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 01:21:06.233352  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:06.237897  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:06.237939  719706 retry.go:31] will retry after 334.424162ms: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:06.572493  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:06.576817  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:06.576864  719706 retry.go:31] will retry after 455.968278ms: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:07.033120  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:07.037402  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:07.037457  719706 retry.go:31] will retry after 542.641153ms: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:07.581147  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:07.585205  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0223 01:21:07.600330  719706 system_pods.go:86] 5 kube-system pods found
	I0223 01:21:07.600372  719706 system_pods.go:89] "etcd-kubernetes-upgrade-849442" [6a88c3f3-1254-4005-a508-7140a267e750] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 01:21:07.600385  719706 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-849442" [e501f128-8e0b-4c82-983e-65dc3f551efa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 01:21:07.600399  719706 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-849442" [0967f368-c11a-4785-8222-9d16f3a35b62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 01:21:07.600411  719706 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-849442" [ae6dc045-1493-4a9f-bff1-07cb53a47cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 01:21:07.600426  719706 system_pods.go:89] "storage-provisioner" [9c065abc-2897-46ce-a091-8c3be267c7bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0223 01:21:07.600442  719706 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0223 01:21:07.600456  719706 kubeadm.go:1135] stopping kube-system containers ...
	I0223 01:21:07.600515  719706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:21:07.624572  719706 docker.go:483] Stopping containers: [61fd81f830e0 5587f293735c 95f7e9cfbf4e a8e9486463e4 6fd80ab30477 70a9f28a33b2 7443725fd420 691719fd4f38 0d9742a7167e a8d2207d87dc a409d6b9943e 3826a0d67cba 70d5f0010e61 4cbd664bcfc3 7e4d0ea7409f a78037172539]
	I0223 01:21:07.624673  719706 ssh_runner.go:195] Run: docker stop 61fd81f830e0 5587f293735c 95f7e9cfbf4e a8e9486463e4 6fd80ab30477 70a9f28a33b2 7443725fd420 691719fd4f38 0d9742a7167e a8d2207d87dc a409d6b9943e 3826a0d67cba 70d5f0010e61 4cbd664bcfc3 7e4d0ea7409f a78037172539
	I0223 01:21:08.111197  719706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 01:21:08.258588  719706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:21:08.268445  719706 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5647 Feb 23 01:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 23 01:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 23 01:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 23 01:20 /etc/kubernetes/scheduler.conf
	
	I0223 01:21:08.268520  719706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 01:21:08.278274  719706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 01:21:08.288159  719706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 01:21:08.298196  719706 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:21:08.298270  719706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 01:21:08.306209  719706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 01:21:08.314143  719706 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:21:08.314208  719706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 01:21:08.322901  719706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:21:08.331225  719706 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 01:21:08.331264  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:21:08.379535  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:21:06.369652  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:08.371121  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:10.391577  688193 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (7.262568683s)
	I0223 01:21:10.391653  688193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:21:10.403502  688193 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:21:10.412425  688193 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:21:10.412478  688193 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:21:10.420211  688193 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:21:10.420250  688193 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:21:10.458679  688193 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0223 01:21:10.458732  688193 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:21:10.519963  688193 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:21:10.520097  688193 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:21:10.520162  688193 kubeadm.go:322] OS: Linux
	I0223 01:21:10.520229  688193 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:21:10.520298  688193 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:21:10.520365  688193 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:21:10.520433  688193 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:21:10.520502  688193 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:21:10.520573  688193 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:21:10.520637  688193 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0223 01:21:10.520710  688193 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0223 01:21:10.520779  688193 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0223 01:21:10.590133  688193 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:21:10.590273  688193 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:21:10.590377  688193 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:21:10.906727  688193 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:21:10.909883  688193 out.go:204]   - Generating certificates and keys ...
	I0223 01:21:10.909989  688193 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:21:10.910093  688193 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:21:10.910208  688193 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:21:10.910303  688193 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:21:10.910414  688193 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:21:10.910543  688193 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:21:10.910644  688193 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:21:10.910736  688193 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:21:10.910855  688193 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:21:10.911161  688193 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:21:10.911573  688193 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:21:10.911684  688193 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:21:11.082462  688193 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:21:11.166912  688193 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0223 01:21:11.542410  688193 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:21:11.773393  688193 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:21:12.003939  688193 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:21:12.004502  688193 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:21:12.006643  688193 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:21:09.039254  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:21:09.200389  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:21:09.258379  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:21:09.321158  719706 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:21:09.321265  719706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:21:09.821376  719706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:21:10.321405  719706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:21:10.333677  719706 api_server.go:72] duration metric: took 1.012518863s to wait for apiserver process to appear ...
	I0223 01:21:10.333707  719706 api_server.go:88] waiting for apiserver healthz status ...
	I0223 01:21:10.333743  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:13.377167  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 01:21:13.377199  719706 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 01:21:13.377213  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:13.478739  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0223 01:21:13.478778  719706 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:13.834027  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:13.837610  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0223 01:21:13.837647  719706 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:10.870138  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:12.871287  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ttsqs" in "kube-system" namespace has status "Ready":"False"
	I0223 01:21:14.334435  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:14.338949  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0223 01:21:14.338982  719706 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0223 01:21:14.834378  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:14.838152  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0223 01:21:14.844444  719706 api_server.go:141] control plane version: v1.29.0-rc.2
	I0223 01:21:14.844474  719706 api_server.go:131] duration metric: took 4.510757649s to wait for apiserver health ...
	I0223 01:21:14.844489  719706 cni.go:84] Creating CNI manager for ""
	I0223 01:21:14.844505  719706 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 01:21:14.846366  719706 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 01:21:14.847565  719706 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 01:21:14.856657  719706 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 01:21:14.880802  719706 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 01:21:14.888796  719706 system_pods.go:59] 5 kube-system pods found
	I0223 01:21:14.888837  719706 system_pods.go:61] "etcd-kubernetes-upgrade-849442" [6a88c3f3-1254-4005-a508-7140a267e750] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 01:21:14.888848  719706 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-849442" [e501f128-8e0b-4c82-983e-65dc3f551efa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 01:21:14.888862  719706 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-849442" [0967f368-c11a-4785-8222-9d16f3a35b62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 01:21:14.888883  719706 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-849442" [ae6dc045-1493-4a9f-bff1-07cb53a47cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 01:21:14.888891  719706 system_pods.go:61] "storage-provisioner" [9c065abc-2897-46ce-a091-8c3be267c7bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0223 01:21:14.888906  719706 system_pods.go:74] duration metric: took 7.992495ms to wait for pod list to return data ...
	I0223 01:21:14.888924  719706 node_conditions.go:102] verifying NodePressure condition ...
	I0223 01:21:14.892401  719706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 01:21:14.892433  719706 node_conditions.go:123] node cpu capacity is 8
	I0223 01:21:14.892448  719706 node_conditions.go:105] duration metric: took 3.511134ms to run NodePressure ...
	I0223 01:21:14.892470  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:21:15.157148  719706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 01:21:15.164240  719706 ops.go:34] apiserver oom_adj: -16
	I0223 01:21:15.164260  719706 kubeadm.go:640] restartCluster took 10.890584158s
	I0223 01:21:15.164268  719706 kubeadm.go:406] StartCluster complete in 10.978387511s
	I0223 01:21:15.164286  719706 settings.go:142] acquiring lock: {Name:mkdd07176a1016ae9ca7d71258b6199ead689cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:21:15.164386  719706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:21:15.165366  719706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/kubeconfig: {Name:mk5dc50cd20b0f8bda8ed11ebbad47615452aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:21:15.165619  719706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 01:21:15.165724  719706 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0223 01:21:15.165811  719706 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-849442"
	I0223 01:21:15.165818  719706 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-849442"
	I0223 01:21:15.165823  719706 config.go:182] Loaded profile config "kubernetes-upgrade-849442": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0223 01:21:15.165835  719706 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-849442"
	I0223 01:21:15.165836  719706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-849442"
	W0223 01:21:15.165845  719706 addons.go:243] addon storage-provisioner should already be in state true
	I0223 01:21:15.165903  719706 host.go:66] Checking if "kubernetes-upgrade-849442" exists ...
	I0223 01:21:15.166236  719706 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:21:15.166460  719706 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:21:15.166543  719706 kapi.go:59] client config for kubernetes-upgrade-849442: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key", CAFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ab80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 01:21:15.169863  719706 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-849442" context rescaled to 1 replicas
	I0223 01:21:15.169905  719706 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 01:21:15.172789  719706 out.go:177] * Verifying Kubernetes components...
	I0223 01:21:15.174162  719706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:21:15.190182  719706 kapi.go:59] client config for kubernetes-upgrade-849442: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubernetes-upgrade-849442/client.key", CAFile:"/home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ab80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 01:21:15.190512  719706 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-849442"
	W0223 01:21:15.190529  719706 addons.go:243] addon default-storageclass should already be in state true
	I0223 01:21:15.190558  719706 host.go:66] Checking if "kubernetes-upgrade-849442" exists ...
	I0223 01:21:15.191095  719706 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-849442 --format={{.State.Status}}
	I0223 01:21:15.197191  719706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:21:15.198812  719706 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 01:21:15.198835  719706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 01:21:15.198890  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:21:15.223561  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:21:15.223921  719706 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 01:21:15.223948  719706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 01:21:15.224023  719706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-849442
	I0223 01:21:15.249483  719706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/kubernetes-upgrade-849442/id_rsa Username:docker}
	I0223 01:21:15.267095  719706 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:21:15.267169  719706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:21:15.267095  719706 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0223 01:21:15.280853  719706 api_server.go:72] duration metric: took 110.910731ms to wait for apiserver process to appear ...
	I0223 01:21:15.280880  719706 api_server.go:88] waiting for apiserver healthz status ...
	I0223 01:21:15.280910  719706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0223 01:21:15.285488  719706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0223 01:21:15.286766  719706 api_server.go:141] control plane version: v1.29.0-rc.2
	I0223 01:21:15.286786  719706 api_server.go:131] duration metric: took 5.898943ms to wait for apiserver health ...
	I0223 01:21:15.286793  719706 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 01:21:15.292658  719706 system_pods.go:59] 5 kube-system pods found
	I0223 01:21:15.292689  719706 system_pods.go:61] "etcd-kubernetes-upgrade-849442" [6a88c3f3-1254-4005-a508-7140a267e750] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 01:21:15.292700  719706 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-849442" [e501f128-8e0b-4c82-983e-65dc3f551efa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 01:21:15.292713  719706 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-849442" [0967f368-c11a-4785-8222-9d16f3a35b62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 01:21:15.292722  719706 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-849442" [ae6dc045-1493-4a9f-bff1-07cb53a47cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 01:21:15.292729  719706 system_pods.go:61] "storage-provisioner" [9c065abc-2897-46ce-a091-8c3be267c7bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0223 01:21:15.292742  719706 system_pods.go:74] duration metric: took 5.942132ms to wait for pod list to return data ...
	I0223 01:21:15.292758  719706 kubeadm.go:581] duration metric: took 122.820098ms to wait for : map[apiserver:true system_pods:true] ...
	I0223 01:21:15.292776  719706 node_conditions.go:102] verifying NodePressure condition ...
	I0223 01:21:15.295187  719706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 01:21:15.295207  719706 node_conditions.go:123] node cpu capacity is 8
	I0223 01:21:15.295219  719706 node_conditions.go:105] duration metric: took 2.437713ms to run NodePressure ...
	I0223 01:21:15.295229  719706 start.go:228] waiting for startup goroutines ...
	I0223 01:21:15.338296  719706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 01:21:15.361959  719706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 01:21:16.055025  719706 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 01:21:16.056301  719706 addons.go:505] enable addons completed in 890.579675ms: enabled=[storage-provisioner default-storageclass]
	I0223 01:21:16.056358  719706 start.go:233] waiting for cluster config update ...
	I0223 01:21:16.056370  719706 start.go:242] writing updated cluster config ...
	I0223 01:21:16.056616  719706 ssh_runner.go:195] Run: rm -f paused
	I0223 01:21:16.127106  719706 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0223 01:21:16.129948  719706 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-849442" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Loaded network plugin cni"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Docker Info: &{ID:a64518ee-4108-48c8-9fad-e89daf904596 Containers:8 ContainersRunning:0 ContainersPaused:0 ContainersStopped:8 Images:15 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-02-23T01:21:02.461092323Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 22.04.3 LTS (containerized) OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0000cca10 NCPU:8 MemTotal:33647996928 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:kubernetes-upgrade-849442 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 23 01:21:02 kubernetes-upgrade-849442 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/691719fd4f380f652503e6cdce9723f1f78f6509bae59d388eacaf46c9f7a8e1/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7443725fd420cc6f0c20331edd620e4a173e1aa6c556f7a0ea8944a2f0c83eff/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/70a9f28a33b2fef5016ae7ca4b36e269fee886c74230242045e02f6fb2a22cb3/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Feb 23 01:21:02 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fd80ab304772e6a95ec412f5a3dfe539a79002a6f97566369c0500a7e66e369/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.700138837Z" level=info msg="ignoring event" container=5587f293735c0ccb948fb8603b6071c92f9cd31fc472349ee15825ce1e750a14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.702148265Z" level=info msg="ignoring event" container=6fd80ab304772e6a95ec412f5a3dfe539a79002a6f97566369c0500a7e66e369 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.702187466Z" level=info msg="ignoring event" container=70a9f28a33b2fef5016ae7ca4b36e269fee886c74230242045e02f6fb2a22cb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.705200416Z" level=info msg="ignoring event" container=a8e9486463e4a11e4118e1c4ed23cb414c3fcfe3b7ad4637ede3207a2d63d970 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.706374740Z" level=info msg="ignoring event" container=7443725fd420cc6f0c20331edd620e4a173e1aa6c556f7a0ea8944a2f0c83eff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.772475685Z" level=info msg="ignoring event" container=691719fd4f380f652503e6cdce9723f1f78f6509bae59d388eacaf46c9f7a8e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:07 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:07.788092079Z" level=info msg="ignoring event" container=61fd81f830e0a47d09662c168fff98f762fec84d49a2c736d86630af4613c6b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:08 kubernetes-upgrade-849442 dockerd[13855]: time="2024-02-23T01:21:08.085285686Z" level=info msg="ignoring event" container=95f7e9cfbf4ec304aa296a5a5cd827796c2a342307440f4c1e7b56b1b6ef0569 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:21:08 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c5e323363da40ea08f4722f75daf91194091dd4d48b2da9362103c63b6ab645/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Feb 23 01:21:08 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a52b632e36cb1fb4e0ead4367221fa6284c13c2e085a7fdafe498a8b7debb5fb/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Feb 23 01:21:08 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/848965771ea6210adf6190eba6bcc926c9d6561011e43fc742afe225a5f26c07/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Feb 23 01:21:08 kubernetes-upgrade-849442 cri-dockerd[14159]: time="2024-02-23T01:21:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af174b2571580bb74a9642137d8cd7bc3067a33bfda7f89e1efcc6c69ffef2ca/resolv.conf as [nameserver 192.168.76.1 search us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Feb 23 01:21:08 kubernetes-upgrade-849442 cri-dockerd[14159]: W0223 01:21:08.261957   14159 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a7add0baefa0       4270645ed6b7a       7 seconds ago       Running             kube-scheduler            2                   a52b632e36cb1       kube-scheduler-kubernetes-upgrade-849442
	b2213bb3b14d6       d4e01cdf63970       7 seconds ago       Running             kube-controller-manager   2                   848965771ea62       kube-controller-manager-kubernetes-upgrade-849442
	e07f3ed51c223       bbb47a0f83324       7 seconds ago       Running             kube-apiserver            2                   af174b2571580       kube-apiserver-kubernetes-upgrade-849442
	cb373e575d349       a0eed15eed449       7 seconds ago       Running             etcd                      2                   8c5e323363da4       etcd-kubernetes-upgrade-849442
	61fd81f830e0a       a0eed15eed449       14 seconds ago      Exited              etcd                      1                   6fd80ab304772       etcd-kubernetes-upgrade-849442
	5587f293735c0       d4e01cdf63970       14 seconds ago      Exited              kube-controller-manager   1                   70a9f28a33b2f       kube-controller-manager-kubernetes-upgrade-849442
	95f7e9cfbf4ec       bbb47a0f83324       14 seconds ago      Exited              kube-apiserver            1                   7443725fd420c       kube-apiserver-kubernetes-upgrade-849442
	a8e9486463e4a       4270645ed6b7a       14 seconds ago      Exited              kube-scheduler            1                   691719fd4f380       kube-scheduler-kubernetes-upgrade-849442
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-849442
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-849442
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60a1754c54128d325d930960488a4adf9d1d6f25
	                    minikube.k8s.io/name=kubernetes-upgrade-849442
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_23T01_20_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 23 Feb 2024 01:20:44 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-849442
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 23 Feb 2024 01:21:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 23 Feb 2024 01:21:13 +0000   Fri, 23 Feb 2024 01:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 23 Feb 2024 01:21:13 +0000   Fri, 23 Feb 2024 01:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 23 Feb 2024 01:21:13 +0000   Fri, 23 Feb 2024 01:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 23 Feb 2024 01:21:13 +0000   Fri, 23 Feb 2024 01:21:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-849442
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5b21105a05c4aabad982d377905ff03
	  System UUID:                97e88f13-c528-473f-9b06-e1e34c7ed1a7
	  Boot ID:                    a5b7b0cd-8cd7-42be-9dcf-b1b1f5c94b65
	  Kernel Version:             5.15.0-1051-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-849442                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         32s
	  kube-system                 kube-apiserver-kubernetes-upgrade-849442             250m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         31s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-849442    200m (2%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         31s
	  kube-system                 kube-scheduler-kubernetes-upgrade-849442             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%!)(MISSING)   0 (0%!)(MISSING)
	  memory             100Mi (0%!)(MISSING)  0 (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 30s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  30s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s              kubelet  Node kubernetes-upgrade-849442 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s              kubelet  Node kubernetes-upgrade-849442 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s              kubelet  Node kubernetes-upgrade-849442 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node kubernetes-upgrade-849442 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node kubernetes-upgrade-849442 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node kubernetes-upgrade-849442 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 22 a3 d5 31 48 08 06
	[  +0.000313] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 68 a4 eb 6b 0b 08 06
	[  +0.693280] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 01 f0 ab 20 4c 08 06
	[  +0.000294] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa a9 7f 26 cb 1a 08 06
	[ +12.180204] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce cb 78 0b 55 05 08 06
	[  +0.000445] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 73 43 11 9e 29 08 06
	[Feb23 01:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce 38 d7 15 a7 4a 08 06
	[ +12.376348] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 38 60 1e 90 01 08 06
	[  +0.000343] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 38 d7 15 a7 4a 08 06
	[  +3.127808] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a e0 0a a4 95 d4 08 06
	[Feb23 01:16] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 2f 60 3e 6f 7e 08 06
	[  +2.561480] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e 33 de a6 1d b4 08 06
	[Feb23 01:17] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 40 dd 8b cc 1d 08 06
	
	
	==> etcd [61fd81f830e0] <==
	{"level":"info","ts":"2024-02-23T01:21:03.391607Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-23T01:21:04.779773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-23T01:21:04.779827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-23T01:21:04.779851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-02-23T01:21:04.779865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:04.779874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:04.779886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:04.779899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:04.781084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-23T01:21:04.781263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-23T01:21:04.781333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-23T01:21:04.781078Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-849442 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-23T01:21:04.781113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-23T01:21:04.783656Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-23T01:21:04.78384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-23T01:21:07.657664Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-23T01:21:07.657751Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-849442","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-23T01:21:07.657843Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-23T01:21:07.657978Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-23T01:21:07.682462Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-23T01:21:07.682533Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-23T01:21:07.682589Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-23T01:21:07.686554Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-23T01:21:07.686693Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-23T01:21:07.686719Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-849442","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [cb373e575d34] <==
	{"level":"info","ts":"2024-02-23T01:21:10.17833Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-23T01:21:10.178339Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-23T01:21:10.178586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-23T01:21:10.178644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-23T01:21:10.178735Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-23T01:21:10.178768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-23T01:21:10.181005Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-23T01:21:10.181215Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-23T01:21:10.181253Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-23T01:21:10.181386Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-23T01:21:10.181396Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-23T01:21:12.013761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:12.01384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:12.013859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-23T01:21:12.013879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-23T01:21:12.0139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-23T01:21:12.013912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-23T01:21:12.013928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-23T01:21:12.014844Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-849442 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-23T01:21:12.014919Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-23T01:21:12.015001Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-23T01:21:12.014987Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-23T01:21:12.015058Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-23T01:21:12.017612Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-23T01:21:12.017649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 01:21:17 up  2:03,  0 users,  load average: 4.38, 2.76, 2.55
	Linux kubernetes-upgrade-849442 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [95f7e9cfbf4e] <==
	W0223 01:21:07.670628       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0223 01:21:07.670666       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0223 01:21:07.670686       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670729       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670777       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670820       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670858       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670898       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670952       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.670955       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0223 01:21:07.671145       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0223 01:21:07.671277       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0223 01:21:07.671311       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0223 01:21:07.671377       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0223 01:21:07.671512       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.671559       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 01:21:07.671598       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0223 01:21:07.671606       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0223 01:21:07.671632       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0223 01:21:07.671694       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0223 01:21:07.671766       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0223 01:21:07.671762       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0223 01:21:07.671779       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0223 01:21:07.671808       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0223 01:21:07.672580       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e07f3ed51c22] <==
	I0223 01:21:13.319363       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0223 01:21:13.319383       1 naming_controller.go:291] Starting NamingConditionController
	I0223 01:21:13.319400       1 establishing_controller.go:76] Starting EstablishingController
	I0223 01:21:13.319415       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0223 01:21:13.319445       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0223 01:21:13.470736       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 01:21:13.470936       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0223 01:21:13.470960       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0223 01:21:13.471588       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0223 01:21:13.471738       1 shared_informer.go:318] Caches are synced for configmaps
	I0223 01:21:13.471785       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0223 01:21:13.471802       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0223 01:21:13.471941       1 aggregator.go:165] initial CRD sync complete...
	I0223 01:21:13.471952       1 autoregister_controller.go:141] Starting autoregister controller
	I0223 01:21:13.471959       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0223 01:21:13.471965       1 cache.go:39] Caches are synced for autoregister controller
	I0223 01:21:13.473369       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 01:21:13.488125       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0223 01:21:13.488442       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0223 01:21:14.321636       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 01:21:14.989506       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0223 01:21:14.998101       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0223 01:21:15.026667       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0223 01:21:15.050353       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 01:21:15.055867       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [5587f293735c] <==
	I0223 01:21:04.196018       1 serving.go:380] Generated self-signed cert in-memory
	I0223 01:21:04.665473       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0223 01:21:04.665564       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 01:21:04.667219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0223 01:21:04.670184       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 01:21:04.670270       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0223 01:21:04.670317       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [b2213bb3b14d] <==
	I0223 01:21:15.690393       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0223 01:21:15.690433       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0223 01:21:15.690457       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0223 01:21:15.690480       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	W0223 01:21:15.690500       1 shared_informer.go:591] resyncPeriod 16h40m55.702592271s is smaller than resyncCheckPeriod 23h37m6.375309026s and the informer has already started. Changing it to 23h37m6.375309026s
	I0223 01:21:15.690584       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0223 01:21:15.690614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0223 01:21:15.690642       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	W0223 01:21:15.690654       1 shared_informer.go:591] resyncPeriod 17h32m48.841595169s is smaller than resyncCheckPeriod 23h37m6.375309026s and the informer has already started. Changing it to 23h37m6.375309026s
	I0223 01:21:15.690696       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0223 01:21:15.690756       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0223 01:21:15.690782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0223 01:21:15.690805       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0223 01:21:15.690831       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0223 01:21:15.690854       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0223 01:21:15.690875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0223 01:21:15.690936       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0223 01:21:15.690978       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0223 01:21:15.690992       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0223 01:21:15.691039       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0223 01:21:15.839251       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0223 01:21:15.839393       1 stateful_set.go:161] "Starting stateful set controller"
	I0223 01:21:15.839408       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0223 01:21:15.986559       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0223 01:21:15.986611       1 cleaner.go:83] "Starting CSR cleaner controller"
	
	
	==> kube-scheduler [2a7add0baefa] <==
	I0223 01:21:11.094797       1 serving.go:380] Generated self-signed cert in-memory
	W0223 01:21:13.386707       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0223 01:21:13.386739       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 01:21:13.386751       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0223 01:21:13.386760       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0223 01:21:13.480523       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0223 01:21:13.480562       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 01:21:13.482121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0223 01:21:13.482171       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 01:21:13.482814       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0223 01:21:13.483101       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 01:21:13.582937       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a8e9486463e4] <==
	I0223 01:21:04.545250       1 serving.go:380] Generated self-signed cert in-memory
	W0223 01:21:06.079843       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0223 01:21:06.079894       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0223 01:21:06.079909       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0223 01:21:06.079949       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0223 01:21:06.094567       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0223 01:21:06.094600       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 01:21:06.095998       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0223 01:21:06.096026       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 01:21:06.096512       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0223 01:21:06.097132       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 01:21:06.198723       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 01:21:07.679120       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0223 01:21:07.679174       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0223 01:21:07.680246       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0223 01:21:07.682351       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.706866   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.706903   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.706969   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.706995   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/64d4f267eaac0c208461160033de4ea2-etcd-data\") pod \"etcd-kubernetes-upgrade-849442\" (UID: \"64d4f267eaac0c208461160033de4ea2\") " pod="kube-system/etcd-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707023   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df7c953fa190f269f0c67ebc988d3399-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-849442\" (UID: \"df7c953fa190f269f0c67ebc988d3399\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707065   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707132   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df7c953fa190f269f0c67ebc988d3399-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-849442\" (UID: \"df7c953fa190f269f0c67ebc988d3399\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707181   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df7c953fa190f269f0c67ebc988d3399-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-849442\" (UID: \"df7c953fa190f269f0c67ebc988d3399\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707213   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707247   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707285   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfd6ab20aca52fcbff5920adc0f655ec-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-849442\" (UID: \"dfd6ab20aca52fcbff5920adc0f655ec\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.707319   15340 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/261402ed7ed0ab69f21701993e46cc4e-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-849442\" (UID: \"261402ed7ed0ab69f21701993e46cc4e\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-849442"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: E0223 01:21:09.906253   15340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-849442?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.938641   15340 scope.go:117] "RemoveContainer" containerID="61fd81f830e0a47d09662c168fff98f762fec84d49a2c736d86630af4613c6b5"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.944786   15340 scope.go:117] "RemoveContainer" containerID="95f7e9cfbf4ec304aa296a5a5cd827796c2a342307440f4c1e7b56b1b6ef0569"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.955536   15340 scope.go:117] "RemoveContainer" containerID="5587f293735c0ccb948fb8603b6071c92f9cd31fc472349ee15825ce1e750a14"
	Feb 23 01:21:09 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:09.959564   15340 scope.go:117] "RemoveContainer" containerID="a8e9486463e4a11e4118e1c4ed23cb414c3fcfe3b7ad4637ede3207a2d63d970"
	Feb 23 01:21:10 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:10.079229   15340 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-849442"
	Feb 23 01:21:10 kubernetes-upgrade-849442 kubelet[15340]: E0223 01:21:10.080400   15340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-849442"
	Feb 23 01:21:10 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:10.889682   15340 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-849442"
	Feb 23 01:21:13 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:13.497478   15340 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-849442"
	Feb 23 01:21:13 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:13.497741   15340 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-849442"
	Feb 23 01:21:13 kubernetes-upgrade-849442 kubelet[15340]: E0223 01:21:13.624857   15340 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-849442\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-849442"
	Feb 23 01:21:14 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:14.293294   15340 apiserver.go:52] "Watching apiserver"
	Feb 23 01:21:14 kubernetes-upgrade-849442 kubelet[15340]: I0223 01:21:14.304205   15340 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-849442 -n kubernetes-upgrade-849442
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-849442 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-849442 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-849442 describe pod storage-provisioner: exit status 1 (65.096188ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-849442 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-849442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-849442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-849442: (2.17008183s)
--- FAIL: TestKubernetesUpgrade (814.95s)
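When triaging failures like the one above, the klog-prefixed lines quoted in the logs (e.g. `E0223 01:21:07.682351       1 run.go:74] "command failed" ...`) can be parsed programmatically to sort and correlate events by time. A minimal sketch, not part of the test harness; the regex and field names are illustrative assumptions about the klog header layout visible in this report:

```python
import re

# klog header: <severity><MMDD> <HH:MM:SS.ffffff> <pid> <file>:<line>] <message>
KLOG = re.compile(
    r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<pid>\d+) "
    r"(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
)

def parse_klog(line):
    """Return a dict of klog header fields, or None if the line doesn't match."""
    m = KLOG.match(line.strip())
    return m.groupdict() if m else None

rec = parse_klog(
    'E0223 01:21:07.682351       1 run.go:74] '
    '"command failed" err="finished without leader elect"'
)
```

With `rec["sev"]`, `rec["time"]`, and `rec["src"]` extracted, error-level lines from kube-scheduler, kubelet, and the apiserver can be merged into a single timeline when reading a report like this one.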
TestStartStop/group/old-k8s-version/serial/FirstStart (506.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-799707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-799707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 109 (8m26.54539692s)

-- stdout --
	* [old-k8s-version-799707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node old-k8s-version-799707 in cluster old-k8s-version-799707
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 23 01:23:02 old-k8s-version-799707 kubelet[5695]: E0223 01:23:02.555197    5695 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 23 01:23:05 old-k8s-version-799707 kubelet[5695]: E0223 01:23:05.555141    5695 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:23:06 old-k8s-version-799707 kubelet[5695]: E0223 01:23:06.555495    5695 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	
	

-- /stdout --
** stderr ** 
	I0223 01:14:56.753006  662830 out.go:291] Setting OutFile to fd 1 ...
	I0223 01:14:56.753183  662830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:14:56.753194  662830 out.go:304] Setting ErrFile to fd 2...
	I0223 01:14:56.753201  662830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:14:56.753482  662830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 01:14:56.754220  662830 out.go:298] Setting JSON to false
	I0223 01:14:56.755677  662830 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7046,"bootTime":1708643851,"procs":459,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 01:14:56.755759  662830 start.go:139] virtualization: kvm guest
	I0223 01:14:56.758083  662830 out.go:177] * [old-k8s-version-799707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 01:14:56.759729  662830 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 01:14:56.759750  662830 notify.go:220] Checking for updates...
	I0223 01:14:56.762376  662830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 01:14:56.763831  662830 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:14:56.765364  662830 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 01:14:56.766768  662830 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 01:14:56.768279  662830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 01:14:56.770309  662830 config.go:182] Loaded profile config "enable-default-cni-600346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 01:14:56.770481  662830 config.go:182] Loaded profile config "kubenet-600346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 01:14:56.770597  662830 config.go:182] Loaded profile config "kubernetes-upgrade-849442": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:14:56.770728  662830 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 01:14:56.799470  662830 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 01:14:56.799618  662830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:14:56.862794  662830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:80 SystemTime:2024-02-23 01:14:56.850889026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:14:56.862902  662830 docker.go:295] overlay module found
	I0223 01:14:56.864874  662830 out.go:177] * Using the docker driver based on user configuration
	I0223 01:14:56.866226  662830 start.go:299] selected driver: docker
	I0223 01:14:56.866242  662830 start.go:903] validating driver "docker" against <nil>
	I0223 01:14:56.866254  662830 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 01:14:56.867187  662830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:14:56.933848  662830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:77 SystemTime:2024-02-23 01:14:56.92293351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:14:56.934026  662830 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0223 01:14:56.934708  662830 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 01:14:56.936509  662830 out.go:177] * Using Docker driver with root privileges
	I0223 01:14:56.937941  662830 cni.go:84] Creating CNI manager for ""
	I0223 01:14:56.937973  662830 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:14:56.937988  662830 start_flags.go:323] config:
	{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:14:56.939764  662830 out.go:177] * Starting control plane node old-k8s-version-799707 in cluster old-k8s-version-799707
	I0223 01:14:56.941145  662830 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 01:14:56.943284  662830 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 01:14:56.944718  662830 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:14:56.944763  662830 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 01:14:56.944779  662830 cache.go:56] Caching tarball of preloaded images
	I0223 01:14:56.944783  662830 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 01:14:56.944855  662830 preload.go:174] Found /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 01:14:56.944868  662830 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 01:14:56.945000  662830 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:14:56.945022  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json: {Name:mkd1d8c2e3454dfcc4d18803053dd224c3f53d4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:14:56.963276  662830 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 01:14:56.963309  662830 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 01:14:56.963337  662830 cache.go:194] Successfully downloaded all kic artifacts
	I0223 01:14:56.963382  662830 start.go:365] acquiring machines lock for old-k8s-version-799707: {Name:mkec58acc477a1259ea890fef71c8d064abcdc6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 01:14:56.963507  662830 start.go:369] acquired machines lock for "old-k8s-version-799707" in 100.39µs
	I0223 01:14:56.963545  662830 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 01:14:56.963697  662830 start.go:125] createHost starting for "" (driver="docker")
	I0223 01:14:56.969729  662830 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 01:14:56.970045  662830 start.go:159] libmachine.API.Create for "old-k8s-version-799707" (driver="docker")
	I0223 01:14:56.970101  662830 client.go:168] LocalClient.Create starting
	I0223 01:14:56.970198  662830 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem
	I0223 01:14:56.970239  662830 main.go:141] libmachine: Decoding PEM data...
	I0223 01:14:56.970256  662830 main.go:141] libmachine: Parsing certificate...
	I0223 01:14:56.970319  662830 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem
	I0223 01:14:56.970352  662830 main.go:141] libmachine: Decoding PEM data...
	I0223 01:14:56.970370  662830 main.go:141] libmachine: Parsing certificate...
	I0223 01:14:56.970742  662830 cli_runner.go:164] Run: docker network inspect old-k8s-version-799707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 01:14:56.999086  662830 cli_runner.go:211] docker network inspect old-k8s-version-799707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 01:14:56.999197  662830 network_create.go:281] running [docker network inspect old-k8s-version-799707] to gather additional debugging logs...
	I0223 01:14:56.999225  662830 cli_runner.go:164] Run: docker network inspect old-k8s-version-799707
	W0223 01:14:57.020090  662830 cli_runner.go:211] docker network inspect old-k8s-version-799707 returned with exit code 1
	I0223 01:14:57.020124  662830 network_create.go:284] error running [docker network inspect old-k8s-version-799707]: docker network inspect old-k8s-version-799707: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-799707 not found
	I0223 01:14:57.020141  662830 network_create.go:286] output of [docker network inspect old-k8s-version-799707]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-799707 not found
	
	** /stderr **
	I0223 01:14:57.020273  662830 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:14:57.038021  662830 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-695ca2766a58 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f4:90:90:f1} reservation:<nil>}
	I0223 01:14:57.038848  662830 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-db2da4de8123 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bf:69:28:2c} reservation:<nil>}
	I0223 01:14:57.039484  662830 network.go:212] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8049eee158fa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:10:7e:b5:e3} reservation:<nil>}
	I0223 01:14:57.040023  662830 network.go:212] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fddd82e2b023 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:5e:7b:0e:60} reservation:<nil>}
	I0223 01:14:57.040639  662830 network.go:212] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-95ad12b8a760 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:1e:b8:26:be} reservation:<nil>}
	I0223 01:14:57.041267  662830 network.go:207] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002245a10}
	I0223 01:14:57.041288  662830 network_create.go:124] attempt to create docker network old-k8s-version-799707 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0223 01:14:57.041330  662830 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-799707 old-k8s-version-799707
	I0223 01:14:57.107392  662830 network_create.go:108] docker network old-k8s-version-799707 192.168.94.0/24 created
	I0223 01:14:57.107442  662830 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-799707" container
	I0223 01:14:57.107539  662830 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 01:14:57.129921  662830 cli_runner.go:164] Run: docker volume create old-k8s-version-799707 --label name.minikube.sigs.k8s.io=old-k8s-version-799707 --label created_by.minikube.sigs.k8s.io=true
	I0223 01:14:57.157191  662830 oci.go:103] Successfully created a docker volume old-k8s-version-799707
	I0223 01:14:57.157294  662830 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-799707-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-799707 --entrypoint /usr/bin/test -v old-k8s-version-799707:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0223 01:14:57.836239  662830 oci.go:107] Successfully prepared a docker volume old-k8s-version-799707
	I0223 01:14:57.836285  662830 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:14:57.836309  662830 kic.go:194] Starting extracting preloaded images to volume ...
	I0223 01:14:57.836372  662830 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-799707:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 01:15:05.391881  662830 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-799707:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (7.555440631s)
	I0223 01:15:05.391922  662830 kic.go:203] duration metric: took 7.555609 seconds to extract preloaded images to volume
	W0223 01:15:05.392070  662830 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 01:15:05.392184  662830 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 01:15:05.456562  662830 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-799707 --name old-k8s-version-799707 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-799707 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-799707 --network old-k8s-version-799707 --ip 192.168.94.2 --volume old-k8s-version-799707:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0223 01:15:05.806732  662830 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Running}}
	I0223 01:15:05.827061  662830 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:15:05.847132  662830 cli_runner.go:164] Run: docker exec old-k8s-version-799707 stat /var/lib/dpkg/alternatives/iptables
	I0223 01:15:05.897318  662830 oci.go:144] the created container "old-k8s-version-799707" has a running status.
	I0223 01:15:05.897356  662830 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa...
	I0223 01:15:06.326861  662830 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 01:15:06.354546  662830 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:15:06.395841  662830 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 01:15:06.395866  662830 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-799707 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 01:15:06.492024  662830 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:15:06.518150  662830 machine.go:88] provisioning docker machine ...
	I0223 01:15:06.518192  662830 ubuntu.go:169] provisioning hostname "old-k8s-version-799707"
	I0223 01:15:06.518262  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:06.544145  662830 main.go:141] libmachine: Using SSH client type: native
	I0223 01:15:06.544454  662830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I0223 01:15:06.544478  662830 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-799707 && echo "old-k8s-version-799707" | sudo tee /etc/hostname
	I0223 01:15:06.701364  662830 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-799707
	
	I0223 01:15:06.701469  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:06.720494  662830 main.go:141] libmachine: Using SSH client type: native
	I0223 01:15:06.720714  662830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I0223 01:15:06.720743  662830 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-799707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-799707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-799707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 01:15:06.857809  662830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:15:06.857843  662830 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 01:15:06.857893  662830 ubuntu.go:177] setting up certificates
	I0223 01:15:06.857911  662830 provision.go:83] configureAuth start
	I0223 01:15:06.857973  662830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:15:06.874371  662830 provision.go:138] copyHostCerts
	I0223 01:15:06.874427  662830 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 01:15:06.874436  662830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 01:15:06.874498  662830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 01:15:06.874588  662830 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 01:15:06.874599  662830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 01:15:06.874622  662830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 01:15:06.874706  662830 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 01:15:06.874717  662830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 01:15:06.874748  662830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 01:15:06.874812  662830 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-799707 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-799707]
	I0223 01:15:06.953695  662830 provision.go:172] copyRemoteCerts
	I0223 01:15:06.953770  662830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 01:15:06.953819  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:06.975670  662830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:15:07.075894  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 01:15:07.100902  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 01:15:07.123753  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 01:15:07.145802  662830 provision.go:86] duration metric: configureAuth took 287.870729ms
	I0223 01:15:07.145841  662830 ubuntu.go:193] setting minikube options for container-runtime
	I0223 01:15:07.146006  662830 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:15:07.146090  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:07.163391  662830 main.go:141] libmachine: Using SSH client type: native
	I0223 01:15:07.163586  662830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I0223 01:15:07.163619  662830 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 01:15:07.298676  662830 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 01:15:07.298702  662830 ubuntu.go:71] root file system type: overlay
	I0223 01:15:07.298836  662830 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 01:15:07.298900  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:07.319323  662830 main.go:141] libmachine: Using SSH client type: native
	I0223 01:15:07.319598  662830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I0223 01:15:07.319710  662830 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 01:15:07.465805  662830 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 01:15:07.465915  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:07.483319  662830 main.go:141] libmachine: Using SSH client type: native
	I0223 01:15:07.483559  662830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I0223 01:15:07.483586  662830 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 01:15:08.223837  662830 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-23 01:15:07.458899994 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 01:15:08.223873  662830 machine.go:91] provisioned docker machine in 1.705695788s
	I0223 01:15:08.223883  662830 client.go:171] LocalClient.Create took 11.253775841s
	I0223 01:15:08.223902  662830 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-799707" took 11.253862282s
	I0223 01:15:08.223916  662830 start.go:300] post-start starting for "old-k8s-version-799707" (driver="docker")
	I0223 01:15:08.223926  662830 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 01:15:08.223978  662830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 01:15:08.224013  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:08.240944  662830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:15:08.334982  662830 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 01:15:08.338100  662830 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 01:15:08.338131  662830 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 01:15:08.338140  662830 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 01:15:08.338147  662830 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 01:15:08.338161  662830 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 01:15:08.338209  662830 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 01:15:08.338284  662830 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 01:15:08.338380  662830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 01:15:08.346102  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:15:08.367148  662830 start.go:303] post-start completed in 143.218996ms
	I0223 01:15:08.367490  662830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:15:08.385271  662830 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:15:08.385589  662830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 01:15:08.385643  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:08.402182  662830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:15:08.490731  662830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 01:15:08.494805  662830 start.go:128] duration metric: createHost completed in 11.531090105s
	I0223 01:15:08.494831  662830 start.go:83] releasing machines lock for "old-k8s-version-799707", held for 11.531304892s
	I0223 01:15:08.494921  662830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:15:08.510988  662830 ssh_runner.go:195] Run: cat /version.json
	I0223 01:15:08.511051  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:08.511052  662830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 01:15:08.511129  662830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:15:08.529648  662830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:15:08.529928  662830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:15:08.707198  662830 ssh_runner.go:195] Run: systemctl --version
	I0223 01:15:08.711459  662830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 01:15:08.715389  662830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 01:15:08.737393  662830 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 01:15:08.737508  662830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 01:15:08.752300  662830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 01:15:08.767741  662830 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0223 01:15:08.767777  662830 start.go:475] detecting cgroup driver to use...
	I0223 01:15:08.767815  662830 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:15:08.767992  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:15:08.783639  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 01:15:08.793147  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 01:15:08.804013  662830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 01:15:08.804074  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 01:15:08.813466  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:15:08.822714  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 01:15:08.831548  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:15:08.840005  662830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 01:15:08.847827  662830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 01:15:08.856158  662830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 01:15:08.863600  662830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 01:15:08.870838  662830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:15:08.937270  662830 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 01:15:09.073929  662830 start.go:475] detecting cgroup driver to use...
	I0223 01:15:09.073986  662830 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:15:09.074076  662830 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 01:15:09.112087  662830 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 01:15:09.112150  662830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 01:15:09.123788  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:15:09.140597  662830 ssh_runner.go:195] Run: which cri-dockerd
	I0223 01:15:09.143994  662830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 01:15:09.152511  662830 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 01:15:09.182762  662830 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 01:15:09.256843  662830 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 01:15:09.370622  662830 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 01:15:09.370817  662830 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 01:15:09.389613  662830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:15:09.466515  662830 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:15:09.853163  662830 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:15:09.882041  662830 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:15:09.910231  662830 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0223 01:15:09.910319  662830 cli_runner.go:164] Run: docker network inspect old-k8s-version-799707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:15:09.932561  662830 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0223 01:15:09.936890  662830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:15:09.948140  662830 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:15:09.948192  662830 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:15:09.970748  662830 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:15:09.970773  662830 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:15:09.970826  662830 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:15:09.981424  662830 ssh_runner.go:195] Run: which lz4
	I0223 01:15:09.984840  662830 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0223 01:15:09.988470  662830 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 01:15:09.988499  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0223 01:15:10.924525  662830 docker.go:649] Took 0.939727 seconds to copy over tarball
	I0223 01:15:10.924605  662830 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 01:15:13.443394  662830 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.518754917s)
	I0223 01:15:13.443428  662830 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0223 01:15:13.502931  662830 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:15:13.511214  662830 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0223 01:15:13.527242  662830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:15:13.601728  662830 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:15:14.339608  662830 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:15:14.361618  662830 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:15:14.361641  662830 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:15:14.361653  662830 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0223 01:15:14.363181  662830 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:15:14.363468  662830 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:15:14.363506  662830 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:15:14.363555  662830 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0223 01:15:14.363598  662830 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:15:14.363657  662830 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:15:14.363689  662830 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:15:14.363728  662830 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0223 01:15:14.364142  662830 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:15:14.364381  662830 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:15:14.364443  662830 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:15:14.364517  662830 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0223 01:15:14.364673  662830 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:15:14.364686  662830 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:15:14.364797  662830 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:15:14.364920  662830 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0223 01:15:14.561683  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:15:14.561683  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0223 01:15:14.564037  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0223 01:15:14.571035  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0223 01:15:14.573935  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:15:14.598601  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:15:14.598970  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:15:14.599149  662830 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:15:14.604824  662830 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0223 01:15:14.604884  662830 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:15:14.604903  662830 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0223 01:15:14.604921  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0223 01:15:14.604976  662830 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:15:14.605017  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:15:14.679559  662830 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0223 01:15:14.679621  662830 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0223 01:15:14.679665  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0223 01:15:14.679772  662830 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0223 01:15:14.679801  662830 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:15:14.679829  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:15:14.679882  662830 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0223 01:15:14.679911  662830 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:15:14.679944  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:15:14.680035  662830 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0223 01:15:14.680078  662830 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:15:14.680120  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:15:14.680199  662830 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0223 01:15:14.680225  662830 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0223 01:15:14.680251  662830 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0223 01:15:14.680484  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0223 01:15:14.680564  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0223 01:15:14.707372  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0223 01:15:14.707417  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0223 01:15:14.716153  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0223 01:15:14.716228  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0223 01:15:14.716488  662830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0223 01:15:14.716545  662830 cache_images.go:92] LoadImages completed in 354.87672ms
	W0223 01:15:14.716632  662830 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0223 01:15:14.716705  662830 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 01:15:14.807172  662830 cni.go:84] Creating CNI manager for ""
	I0223 01:15:14.807193  662830 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:15:14.807211  662830 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 01:15:14.807227  662830 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-799707 NodeName:old-k8s-version-799707 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0223 01:15:14.807374  662830 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-799707"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-799707
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.94.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 01:15:14.807465  662830 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-799707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 01:15:14.807511  662830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 01:15:14.816205  662830 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 01:15:14.816263  662830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 01:15:14.860133  662830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 01:15:14.897670  662830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 01:15:14.917909  662830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 01:15:14.937050  662830 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0223 01:15:14.940699  662830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:15:14.952831  662830 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707 for IP: 192.168.94.2
	I0223 01:15:14.952881  662830 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:14.953180  662830 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 01:15:14.953235  662830 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 01:15:14.953299  662830 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.key
	I0223 01:15:14.953356  662830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.crt with IP's: []
	I0223 01:15:15.345958  662830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.crt ...
	I0223 01:15:15.345993  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.crt: {Name:mke193cfa64c426bc0ba5afa11c4d2c28013bbad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:15.346184  662830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.key ...
	I0223 01:15:15.346201  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.key: {Name:mk3df5dd67d7b375296104968c6e3b5a4be9be7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:15.346303  662830 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a
	I0223 01:15:15.346317  662830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 01:15:15.751447  662830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt.ad8e880a ...
	I0223 01:15:15.751488  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt.ad8e880a: {Name:mk8e4b8bf5412a1da63b54b677d9b8c1d623a4a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:15.751720  662830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a ...
	I0223 01:15:15.751737  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a: {Name:mk045124f08a36c842462ba538d49dbfc2b8c2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:15.751859  662830 certs.go:337] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt
	I0223 01:15:15.751951  662830 certs.go:341] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key
	I0223 01:15:15.752017  662830 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key
	I0223 01:15:15.752042  662830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt with IP's: []
	I0223 01:15:15.996126  662830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt ...
	I0223 01:15:15.996166  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt: {Name:mk871f69f10bc41b5f295bc85ac3b1cae9c1a71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:15.996361  662830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key ...
	I0223 01:15:15.996380  662830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key: {Name:mk0f74328cdf485742cad6f0f077781f34161152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:15:15.996642  662830 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 01:15:15.996689  662830 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 01:15:15.996707  662830 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 01:15:15.996745  662830 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 01:15:15.996777  662830 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 01:15:15.996808  662830 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 01:15:15.996878  662830 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:15:15.997549  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 01:15:16.020672  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 01:15:16.043235  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 01:15:16.064664  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 01:15:16.092606  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 01:15:16.114466  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 01:15:16.136498  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 01:15:16.158821  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 01:15:16.184715  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 01:15:16.206463  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 01:15:16.228992  662830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 01:15:16.254783  662830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 01:15:16.270850  662830 ssh_runner.go:195] Run: openssl version
	I0223 01:15:16.275820  662830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 01:15:16.284181  662830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 01:15:16.287717  662830 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 01:15:16.287779  662830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 01:15:16.294282  662830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
	I0223 01:15:16.303486  662830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 01:15:16.313046  662830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 01:15:16.316579  662830 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 01:15:16.316639  662830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 01:15:16.322962  662830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 01:15:16.332026  662830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 01:15:16.340686  662830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:15:16.343904  662830 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:15:16.343967  662830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:15:16.351156  662830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 01:15:16.363435  662830 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 01:15:16.366876  662830 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0223 01:15:16.366934  662830 kubeadm.go:404] StartCluster: {Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:15:16.367049  662830 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:15:16.388038  662830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 01:15:16.397123  662830 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:15:16.405858  662830 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:15:16.405920  662830 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:15:16.413713  662830 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:15:16.413759  662830 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:15:16.479330  662830 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:15:16.479394  662830 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:15:16.676898  662830 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:15:16.676971  662830 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:15:16.677011  662830 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:15:16.677041  662830 kubeadm.go:322] OS: Linux
	I0223 01:15:16.677079  662830 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:15:16.677120  662830 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:15:16.677159  662830 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:15:16.677200  662830 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:15:16.677247  662830 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:15:16.677298  662830 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:15:16.764811  662830 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:15:16.764957  662830 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:15:16.765086  662830 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:15:16.954902  662830 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:15:16.956631  662830 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:15:16.968887  662830 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:15:17.049621  662830 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:15:17.053037  662830 out.go:204]   - Generating certificates and keys ...
	I0223 01:15:17.053157  662830 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:15:17.053250  662830 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:15:17.525720  662830 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 01:15:17.801446  662830 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 01:15:17.904332  662830 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 01:15:18.347321  662830 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 01:15:18.437131  662830 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 01:15:18.438017  662830 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-799707 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0223 01:15:18.594938  662830 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 01:15:18.595298  662830 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-799707 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0223 01:15:18.787280  662830 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 01:15:19.094470  662830 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 01:15:19.539331  662830 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 01:15:19.539526  662830 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:15:19.661048  662830 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:15:20.082140  662830 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:15:20.424562  662830 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:15:20.681911  662830 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:15:20.683465  662830 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:15:20.685909  662830 out.go:204]   - Booting up control plane ...
	I0223 01:15:20.686106  662830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:15:20.699215  662830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:15:20.700996  662830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:15:20.702205  662830 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:15:20.705549  662830 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:16:00.705707  662830 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:19:20.707360  662830 kubeadm.go:322] 
	I0223 01:19:20.707537  662830 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:19:20.707709  662830 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:19:20.707759  662830 kubeadm.go:322] 
	I0223 01:19:20.707856  662830 kubeadm.go:322] This error is likely caused by:
	I0223 01:19:20.707932  662830 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:19:20.708189  662830 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:19:20.708219  662830 kubeadm.go:322] 
	I0223 01:19:20.708403  662830 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:19:20.708476  662830 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:19:20.708539  662830 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:19:20.708548  662830 kubeadm.go:322] 
	I0223 01:19:20.708676  662830 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:19:20.708814  662830 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:19:20.708926  662830 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:19:20.708997  662830 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:19:20.709105  662830 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:19:20.709167  662830 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:19:20.711478  662830 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:19:20.711608  662830 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:19:20.711849  662830 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:19:20.711971  662830 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:19:20.712089  662830 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:19:20.712225  662830 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 01:19:20.712428  662830 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-799707 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-799707 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 01:19:20.712518  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:19:21.476427  662830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:19:21.487973  662830 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:19:21.488031  662830 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:19:21.496281  662830 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:19:21.496330  662830 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:19:21.542866  662830 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:19:21.542965  662830 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:19:21.718099  662830 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:19:21.718190  662830 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:19:21.718250  662830 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:19:21.718298  662830 kubeadm.go:322] OS: Linux
	I0223 01:19:21.718351  662830 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:19:21.718412  662830 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:19:21.718480  662830 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:19:21.718535  662830 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:19:21.718599  662830 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:19:21.718661  662830 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:19:21.788286  662830 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:19:21.788427  662830 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:19:21.788545  662830 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:19:21.964469  662830 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:19:21.965592  662830 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:19:21.972954  662830 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:19:22.055594  662830 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:19:22.057674  662830 out.go:204]   - Generating certificates and keys ...
	I0223 01:19:22.057783  662830 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:19:22.057869  662830 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:19:22.058016  662830 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:19:22.058141  662830 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:19:22.058237  662830 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:19:22.058314  662830 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:19:22.058419  662830 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:19:22.058501  662830 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:19:22.058573  662830 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:19:22.058898  662830 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:19:22.058965  662830 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:19:22.059040  662830 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:19:22.240393  662830 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:19:22.397081  662830 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:19:22.721666  662830 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:19:22.857798  662830 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:19:22.858680  662830 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:19:22.861014  662830 out.go:204]   - Booting up control plane ...
	I0223 01:19:22.861132  662830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:19:22.864069  662830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:19:22.865223  662830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:19:22.865964  662830 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:19:22.870144  662830 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:20:02.870361  662830 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:23:22.871767  662830 kubeadm.go:322] 
	I0223 01:23:22.872047  662830 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:23:22.872151  662830 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:23:22.872169  662830 kubeadm.go:322] 
	I0223 01:23:22.872236  662830 kubeadm.go:322] This error is likely caused by:
	I0223 01:23:22.872314  662830 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:23:22.872550  662830 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:23:22.872565  662830 kubeadm.go:322] 
	I0223 01:23:22.872786  662830 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:23:22.872856  662830 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:23:22.872926  662830 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:23:22.872937  662830 kubeadm.go:322] 
	I0223 01:23:22.873164  662830 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:23:22.873369  662830 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:23:22.873555  662830 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:23:22.873661  662830 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:23:22.873827  662830 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:23:22.873898  662830 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:23:22.875441  662830 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:23:22.875642  662830 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:23:22.875896  662830 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:23:22.876054  662830 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:23:22.876160  662830 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:23:22.876234  662830 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 01:23:22.876324  662830 kubeadm.go:406] StartCluster complete in 8m6.509401758s
	I0223 01:23:22.876419  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:23:22.895590  662830 logs.go:276] 0 containers: []
	W0223 01:23:22.895614  662830 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:23:22.895660  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:23:22.913998  662830 logs.go:276] 0 containers: []
	W0223 01:23:22.914022  662830 logs.go:278] No container was found matching "etcd"
	I0223 01:23:22.914096  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:23:22.931699  662830 logs.go:276] 0 containers: []
	W0223 01:23:22.931729  662830 logs.go:278] No container was found matching "coredns"
	I0223 01:23:22.931772  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:23:22.948413  662830 logs.go:276] 0 containers: []
	W0223 01:23:22.948445  662830 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:23:22.948502  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:23:22.967459  662830 logs.go:276] 0 containers: []
	W0223 01:23:22.967488  662830 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:23:22.967543  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:23:22.984821  662830 logs.go:276] 0 containers: []
	W0223 01:23:22.984847  662830 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:23:22.984909  662830 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:23:23.001713  662830 logs.go:276] 0 containers: []
	W0223 01:23:23.001741  662830 logs.go:278] No container was found matching "kindnet"
	I0223 01:23:23.001755  662830 logs.go:123] Gathering logs for container status ...
	I0223 01:23:23.001771  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:23:23.038578  662830 logs.go:123] Gathering logs for kubelet ...
	I0223 01:23:23.038605  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:23:23.062489  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:02 old-k8s-version-799707 kubelet[5695]: E0223 01:23:02.555197    5695 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:23:23.068346  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:05 old-k8s-version-799707 kubelet[5695]: E0223 01:23:05.555141    5695 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:23:23.070676  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:06 old-k8s-version-799707 kubelet[5695]: E0223 01:23:06.555495    5695 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:23:23.081716  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:12 old-k8s-version-799707 kubelet[5695]: E0223 01:23:12.560904    5695 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:23:23.090316  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:17 old-k8s-version-799707 kubelet[5695]: E0223 01:23:17.558269    5695 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:23:23.090697  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:17 old-k8s-version-799707 kubelet[5695]: E0223 01:23:17.559376    5695 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:23:23.096279  662830 logs.go:138] Found kubelet problem: Feb 23 01:23:20 old-k8s-version-799707 kubelet[5695]: E0223 01:23:20.555521    5695 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:23:23.100032  662830 logs.go:123] Gathering logs for dmesg ...
	I0223 01:23:23.100052  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:23:23.125369  662830 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:23:23.125399  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:23:23.184890  662830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:23:23.184915  662830 logs.go:123] Gathering logs for Docker ...
	I0223 01:23:23.184932  662830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0223 01:23:23.205836  662830 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 01:23:23.205882  662830 out.go:239] * 
	W0223 01:23:23.205947  662830 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:23:23.205982  662830 out.go:239] * 
	W0223 01:23:23.206941  662830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 01:23:23.209470  662830 out.go:177] X Problems detected in kubelet:
	I0223 01:23:23.210676  662830 out.go:177]   Feb 23 01:23:02 old-k8s-version-799707 kubelet[5695]: E0223 01:23:02.555197    5695 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:23:23.212104  662830 out.go:177]   Feb 23 01:23:05 old-k8s-version-799707 kubelet[5695]: E0223 01:23:05.555141    5695 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:23:23.213506  662830 out.go:177]   Feb 23 01:23:06 old-k8s-version-799707 kubelet[5695]: E0223 01:23:06.555495    5695 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:23:23.216196  662830 out.go:177] 
	W0223 01:23:23.217519  662830 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:23:23.217573  662830 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 01:23:23.217602  662830 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 01:23:23.219109  662830 out.go:177] 

                                                
                                                
** /stderr **
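The kubeadm advisory quoted above suggests two checks: kubelet health via systemd, then the state of the control-plane containers. A minimal sketch of that sequence, assuming `systemctl` and `docker` are on PATH (the `kube` filter and `--tail 50` are illustrative choices, not from the log):

```shell
# Sketch of the troubleshooting steps suggested by kubeadm above.
# Guards make it a no-op where systemd or docker are absent.
set -u

if command -v systemctl >/dev/null 2>&1; then
  systemctl status kubelet --no-pager || true        # kubelet health
  journalctl -xeu kubelet --no-pager | tail -n 50 || true
fi

if command -v docker >/dev/null 2>&1; then
  # List Kubernetes containers, skipping the pause sandboxes
  docker ps -a | grep kube | grep -v pause || true
  # Dump recent logs of any exited kube container
  docker ps -a --filter status=exited --format '{{.ID}} {{.Names}}' |
    grep kube |
    while read -r id _; do
      docker logs --tail 50 "$id" || true
    done
fi
```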
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-799707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-799707
helpers_test.go:235: (dbg) docker inspect old-k8s-version-799707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef",
	        "Created": "2024-02-23T01:15:05.474444114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 666050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:15:05.794433523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hosts",
	        "LogPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef-json.log",
	        "Name": "/old-k8s-version-799707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-799707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-799707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-799707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-799707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-799707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5c1af4776bf69cc328dfffdf7107f9822a0ea56bd95b5a12ee20bbcffb22663",
	            "SandboxKey": "/var/run/docker/netns/b5c1af4776bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-799707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f679df36dcf9",
	                        "old-k8s-version-799707"
	                    ],
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "bd295bc817aac655859be5f1040d2c41b5d0e7f3be9c06731d2af745450199fa",
	                    "EndpointID": "53bcfb6557fc1d2287655cd3e4d1b7970b1c6c19f1eaa786648cd8dc9931ac6c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-799707",
	                        "f679df36dcf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 6 (302.784286ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 01:23:23.581412  760881 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-799707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (506.92s)
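This failure pairs two remediations from the log: the cgroup-driver suggestion from `minikube start` and the stale-kubeconfig warning from `minikube status`. A hedged sketch of applying both, using the profile name from this log (the guard makes the snippet a no-op where minikube is not installed):

```shell
# Sketch of the remediation suggested in the log above: retry the start
# with the kubelet cgroup driver forced to systemd, then repair the
# stale kubectl context reported by `minikube status`.
if command -v minikube >/dev/null 2>&1; then
  minikube start -p old-k8s-version-799707 \
    --extra-config=kubelet.cgroup-driver=systemd || true
  # Fixes: "Your kubectl is pointing to stale minikube-vm"
  minikube update-context -p old-k8s-version-799707 || true
fi
```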

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-799707 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-799707 create -f testdata/busybox.yaml: exit status 1 (48.85705ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-799707" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-799707 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-799707
helpers_test.go:235: (dbg) docker inspect old-k8s-version-799707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef",
	        "Created": "2024-02-23T01:15:05.474444114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 666050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:15:05.794433523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hosts",
	        "LogPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef-json.log",
	        "Name": "/old-k8s-version-799707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-799707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-799707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-799707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-799707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-799707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5c1af4776bf69cc328dfffdf7107f9822a0ea56bd95b5a12ee20bbcffb22663",
	            "SandboxKey": "/var/run/docker/netns/b5c1af4776bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-799707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f679df36dcf9",
	                        "old-k8s-version-799707"
	                    ],
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "bd295bc817aac655859be5f1040d2c41b5d0e7f3be9c06731d2af745450199fa",
	                    "EndpointID": "53bcfb6557fc1d2287655cd3e4d1b7970b1c6c19f1eaa786648cd8dc9931ac6c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-799707",
	                        "f679df36dcf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 6 (284.771045ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 01:23:23.933747  761007 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-799707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-799707
helpers_test.go:235: (dbg) docker inspect old-k8s-version-799707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef",
	        "Created": "2024-02-23T01:15:05.474444114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 666050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:15:05.794433523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hosts",
	        "LogPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef-json.log",
	        "Name": "/old-k8s-version-799707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-799707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-799707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-799707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-799707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-799707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5c1af4776bf69cc328dfffdf7107f9822a0ea56bd95b5a12ee20bbcffb22663",
	            "SandboxKey": "/var/run/docker/netns/b5c1af4776bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-799707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f679df36dcf9",
	                        "old-k8s-version-799707"
	                    ],
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "bd295bc817aac655859be5f1040d2c41b5d0e7f3be9c06731d2af745450199fa",
	                    "EndpointID": "53bcfb6557fc1d2287655cd3e4d1b7970b1c6c19f1eaa786648cd8dc9931ac6c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-799707",
	                        "f679df36dcf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 6 (278.934121ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 01:23:24.231778  761109 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-799707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (81.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-799707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0223 01:23:30.083423  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:23:35.079741  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 01:23:41.243874  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:24:15.086923  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 01:24:21.125117  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:24:26.322413  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:24:40.456520  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-799707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m20.949110582s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-799707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-799707 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-799707 describe deploy/metrics-server -n kube-system: exit status 1 (47.085962ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-799707" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-799707 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-799707
helpers_test.go:235: (dbg) docker inspect old-k8s-version-799707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef",
	        "Created": "2024-02-23T01:15:05.474444114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 666050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:15:05.794433523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hosts",
	        "LogPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef-json.log",
	        "Name": "/old-k8s-version-799707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-799707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-799707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-799707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-799707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-799707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5c1af4776bf69cc328dfffdf7107f9822a0ea56bd95b5a12ee20bbcffb22663",
	            "SandboxKey": "/var/run/docker/netns/b5c1af4776bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-799707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f679df36dcf9",
	                        "old-k8s-version-799707"
	                    ],
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "bd295bc817aac655859be5f1040d2c41b5d0e7f3be9c06731d2af745450199fa",
	                    "EndpointID": "53bcfb6557fc1d2287655cd3e4d1b7970b1c6c19f1eaa786648cd8dc9931ac6c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-799707",
	                        "f679df36dcf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 6 (298.587971ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 01:24:45.544875  763629 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-799707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (81.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (757.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-799707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0223 01:24:48.809037  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:24:54.006403  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:25:08.139298  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:25:46.242003  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:25:47.665677  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:26:01.514874  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:01.520148  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:01.530353  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:01.550625  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:01.590936  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:01.671799  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:01.832299  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:02.153233  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:02.793569  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:04.074011  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:05.271293  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:26:06.634849  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:11.755264  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:26:13.924355  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:26:21.996423  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-799707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 109 (12m35.592260512s)

                                                
                                                
-- stdout --
	* [old-k8s-version-799707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-799707 in cluster old-k8s-version-799707
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Restarting existing docker container for "old-k8s-version-799707" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 01:24:47.003793  764048 out.go:291] Setting OutFile to fd 1 ...
	I0223 01:24:47.004093  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:24:47.004104  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:24:47.004109  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:24:47.004297  764048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 01:24:47.004973  764048 out.go:298] Setting JSON to false
	I0223 01:24:47.006519  764048 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7636,"bootTime":1708643851,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 01:24:47.006586  764048 start.go:139] virtualization: kvm guest
	I0223 01:24:47.008747  764048 out.go:177] * [old-k8s-version-799707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 01:24:47.010551  764048 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 01:24:47.011904  764048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 01:24:47.010620  764048 notify.go:220] Checking for updates...
	I0223 01:24:47.014507  764048 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:24:47.015864  764048 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 01:24:47.017138  764048 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 01:24:47.018411  764048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 01:24:47.020066  764048 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:24:47.021857  764048 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0223 01:24:47.023120  764048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 01:24:47.046565  764048 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 01:24:47.046673  764048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:24:47.099610  764048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:24:47.089716386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:24:47.099718  764048 docker.go:295] overlay module found
	I0223 01:24:47.101615  764048 out.go:177] * Using the docker driver based on existing profile
	I0223 01:24:47.102883  764048 start.go:299] selected driver: docker
	I0223 01:24:47.102897  764048 start.go:903] validating driver "docker" against &{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:47.102997  764048 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 01:24:47.103795  764048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:24:47.153625  764048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:24:47.144803249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:24:47.154044  764048 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 01:24:47.154166  764048 cni.go:84] Creating CNI manager for ""
	I0223 01:24:47.154193  764048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:24:47.154210  764048 start_flags.go:323] config:
	{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:47.156027  764048 out.go:177] * Starting control plane node old-k8s-version-799707 in cluster old-k8s-version-799707
	I0223 01:24:47.157370  764048 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 01:24:47.158890  764048 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 01:24:47.160251  764048 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:24:47.160288  764048 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 01:24:47.160309  764048 cache.go:56] Caching tarball of preloaded images
	I0223 01:24:47.160343  764048 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 01:24:47.160431  764048 preload.go:174] Found /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 01:24:47.160444  764048 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 01:24:47.160574  764048 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:24:47.176632  764048 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 01:24:47.176654  764048 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 01:24:47.176673  764048 cache.go:194] Successfully downloaded all kic artifacts
	I0223 01:24:47.176702  764048 start.go:365] acquiring machines lock for old-k8s-version-799707: {Name:mkec58acc477a1259ea890fef71c8d064abcdc6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 01:24:47.176766  764048 start.go:369] acquired machines lock for "old-k8s-version-799707" in 43.242µs
	I0223 01:24:47.176791  764048 start.go:96] Skipping create...Using existing machine configuration
	I0223 01:24:47.176797  764048 fix.go:54] fixHost starting: 
	I0223 01:24:47.177008  764048 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:24:47.192721  764048 fix.go:102] recreateIfNeeded on old-k8s-version-799707: state=Stopped err=<nil>
	W0223 01:24:47.192746  764048 fix.go:128] unexpected machine state, will restart: <nil>
	I0223 01:24:47.194605  764048 out.go:177] * Restarting existing docker container for "old-k8s-version-799707" ...
	I0223 01:24:47.195889  764048 cli_runner.go:164] Run: docker start old-k8s-version-799707
	I0223 01:24:47.452279  764048 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:24:47.471747  764048 kic.go:430] container "old-k8s-version-799707" state is running.
	I0223 01:24:47.472285  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:47.489570  764048 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:24:47.489761  764048 machine.go:88] provisioning docker machine ...
	I0223 01:24:47.489782  764048 ubuntu.go:169] provisioning hostname "old-k8s-version-799707"
	I0223 01:24:47.489818  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:47.506471  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:47.506715  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:47.506741  764048 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-799707 && echo "old-k8s-version-799707" | sudo tee /etc/hostname
	I0223 01:24:47.507401  764048 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40278->127.0.0.1:33414: read: connection reset by peer
	I0223 01:24:50.649171  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-799707
	
	I0223 01:24:50.649264  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:50.668220  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:50.668659  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:50.668690  764048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-799707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-799707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-799707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 01:24:50.798415  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
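The hostname-fixup command echoed above is idempotent: it only rewrites the `127.0.1.1` entry when the target hostname is not already present, so minikube can re-run it safely on every restart. A standalone sketch of the same logic against a scratch file rather than the real `/etc/hosts` (the `./hosts.sample` path and `placeholder` seed entry are made up for illustration; `[[:space:]]` replaces the GNU-only `\s` used in the log):

```shell
#!/bin/sh
# Sketch of minikube's idempotent /etc/hosts hostname update,
# run against a scratch copy instead of the real /etc/hosts (no sudo).
HOSTS=./hosts.sample
NAME=old-k8s-version-799707
printf '127.0.0.1 localhost\n127.0.1.1 placeholder\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # A 127.0.1.1 entry exists for another name: rewrite it in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

Running the script a second time takes the `grep -q` fast path and leaves the file untouched, which is the property the restart flow relies on.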
	I0223 01:24:50.798446  764048 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 01:24:50.798504  764048 ubuntu.go:177] setting up certificates
	I0223 01:24:50.798521  764048 provision.go:83] configureAuth start
	I0223 01:24:50.798581  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:50.815373  764048 provision.go:138] copyHostCerts
	I0223 01:24:50.815447  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 01:24:50.815464  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 01:24:50.815542  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 01:24:50.815649  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 01:24:50.815662  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 01:24:50.815698  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 01:24:50.815828  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 01:24:50.815845  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 01:24:50.815883  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 01:24:50.815954  764048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-799707 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-799707]
	I0223 01:24:50.956162  764048 provision.go:172] copyRemoteCerts
	I0223 01:24:50.956237  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 01:24:50.956294  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:50.973887  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.066745  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 01:24:51.088783  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 01:24:51.114161  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 01:24:51.136302  764048 provision.go:86] duration metric: configureAuth took 337.765346ms
	I0223 01:24:51.136338  764048 ubuntu.go:193] setting minikube options for container-runtime
	I0223 01:24:51.136542  764048 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:24:51.136603  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.153110  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.153343  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.153360  764048 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 01:24:51.282447  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 01:24:51.282475  764048 ubuntu.go:71] root file system type: overlay
	I0223 01:24:51.282624  764048 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 01:24:51.282692  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.300243  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.300450  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.300510  764048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 01:24:51.445956  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 01:24:51.446035  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.464137  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.464317  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.464339  764048 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 01:24:51.599209  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:24:51.599236  764048 machine.go:91] provisioned docker machine in 4.109460251s
	I0223 01:24:51.599249  764048 start.go:300] post-start starting for "old-k8s-version-799707" (driver="docker")
	I0223 01:24:51.599259  764048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 01:24:51.599311  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 01:24:51.599368  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.617077  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.714796  764048 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 01:24:51.717878  764048 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 01:24:51.717913  764048 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 01:24:51.717926  764048 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 01:24:51.717935  764048 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 01:24:51.717949  764048 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 01:24:51.718015  764048 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 01:24:51.718126  764048 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 01:24:51.718238  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 01:24:51.726135  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:24:51.747990  764048 start.go:303] post-start completed in 148.727396ms
	I0223 01:24:51.748091  764048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 01:24:51.748133  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.764872  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.854725  764048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 01:24:51.858894  764048 fix.go:56] fixHost completed within 4.682089908s
	I0223 01:24:51.858929  764048 start.go:83] releasing machines lock for "old-k8s-version-799707", held for 4.682151168s
	I0223 01:24:51.858987  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:51.875113  764048 ssh_runner.go:195] Run: cat /version.json
	I0223 01:24:51.875169  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.875222  764048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 01:24:51.875284  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.892186  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.892603  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.981915  764048 ssh_runner.go:195] Run: systemctl --version
	I0223 01:24:52.071583  764048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 01:24:52.076094  764048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 01:24:52.076150  764048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 01:24:52.084570  764048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 01:24:52.093490  764048 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 01:24:52.093526  764048 start.go:475] detecting cgroup driver to use...
	I0223 01:24:52.093556  764048 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:24:52.093683  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:24:52.109388  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 01:24:52.119408  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 01:24:52.128541  764048 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 01:24:52.128617  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 01:24:52.138147  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:24:52.148648  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 01:24:52.157740  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:24:52.166291  764048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 01:24:52.174294  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 01:24:52.182560  764048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 01:24:52.191707  764048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 01:24:52.199478  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:52.279573  764048 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 01:24:52.364794  764048 start.go:475] detecting cgroup driver to use...
	I0223 01:24:52.364849  764048 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:24:52.364907  764048 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 01:24:52.378283  764048 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 01:24:52.378357  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 01:24:52.390249  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:24:52.407123  764048 ssh_runner.go:195] Run: which cri-dockerd
	I0223 01:24:52.410703  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 01:24:52.419413  764048 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 01:24:52.436969  764048 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 01:24:52.538363  764048 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 01:24:52.641674  764048 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 01:24:52.641801  764048 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 01:24:52.672699  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:52.752635  764048 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:24:53.005432  764048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:24:53.028950  764048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:24:53.053932  764048 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0223 01:24:53.054034  764048 cli_runner.go:164] Run: docker network inspect old-k8s-version-799707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:24:53.069369  764048 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0223 01:24:53.072991  764048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:24:53.082986  764048 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:24:53.083031  764048 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:24:53.101057  764048 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:24:53.101079  764048 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:24:53.101131  764048 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:24:53.109330  764048 ssh_runner.go:195] Run: which lz4
	I0223 01:24:53.112468  764048 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0223 01:24:53.115371  764048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 01:24:53.115398  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0223 01:24:53.900009  764048 docker.go:649] Took 0.787557 seconds to copy over tarball
	I0223 01:24:53.900101  764048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 01:24:55.917765  764048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.017627982s)
	I0223 01:24:55.917798  764048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0223 01:24:55.986783  764048 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:24:55.995174  764048 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0223 01:24:56.012678  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:56.093644  764048 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:24:58.619686  764048 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.525997554s)
	I0223 01:24:58.619778  764048 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:24:58.638743  764048 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:24:58.638772  764048 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:24:58.638784  764048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0223 01:24:58.640360  764048 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0223 01:24:58.640468  764048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.640607  764048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.640677  764048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.640855  764048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.640978  764048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.641912  764048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.642118  764048 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.642279  764048 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0223 01:24:58.642467  764048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.642541  764048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.642661  764048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.642840  764048 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.643303  764048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.643387  764048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.643504  764048 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.801512  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0223 01:24:58.810449  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.822313  764048 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0223 01:24:58.822362  764048 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0223 01:24:58.822407  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0223 01:24:58.828135  764048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0223 01:24:58.828187  764048 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.828232  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.832726  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.841318  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0223 01:24:58.843598  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.845024  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.847494  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0223 01:24:58.863715  764048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0223 01:24:58.863770  764048 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.863800  764048 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0223 01:24:58.863817  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.863840  764048 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.863881  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.877108  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.881932  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0223 01:24:58.883024  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0223 01:24:58.887720  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.888992  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.896469  764048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0223 01:24:58.896520  764048 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.896568  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.909663  764048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0223 01:24:58.909718  764048 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.909761  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.909764  764048 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0223 01:24:58.909801  764048 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.909863  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.915957  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0223 01:24:58.930358  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0223 01:24:58.930531  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0223 01:24:58.930584  764048 cache_images.go:92] LoadImages completed in 291.787416ms
	W0223 01:24:58.930662  764048 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0223 01:24:58.930711  764048 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 01:24:59.002793  764048 cni.go:84] Creating CNI manager for ""
	I0223 01:24:59.002825  764048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:24:59.002849  764048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 01:24:59.002873  764048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-799707 NodeName:old-k8s-version-799707 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0223 01:24:59.003021  764048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-799707"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-799707
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.94.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 01:24:59.003101  764048 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-799707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 01:24:59.003150  764048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 01:24:59.011882  764048 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 01:24:59.011955  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 01:24:59.020226  764048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 01:24:59.036352  764048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 01:24:59.052765  764048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 01:24:59.068716  764048 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0223 01:24:59.071794  764048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:24:59.081516  764048 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707 for IP: 192.168.94.2
	I0223 01:24:59.081554  764048 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:24:59.081720  764048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 01:24:59.081765  764048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 01:24:59.081865  764048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.key
	I0223 01:24:59.081931  764048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a
	I0223 01:24:59.081989  764048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key
	I0223 01:24:59.082135  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 01:24:59.082182  764048 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 01:24:59.082205  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 01:24:59.082240  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 01:24:59.082275  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 01:24:59.082304  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 01:24:59.082383  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:24:59.083221  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 01:24:59.105664  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 01:24:59.127530  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 01:24:59.149110  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 01:24:59.171812  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 01:24:59.194479  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 01:24:59.215613  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 01:24:59.236896  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 01:24:59.258380  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 01:24:59.280812  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 01:24:59.303146  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 01:24:59.325675  764048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 01:24:59.342098  764048 ssh_runner.go:195] Run: openssl version
	I0223 01:24:59.347196  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 01:24:59.355998  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.359380  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.359434  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.366000  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
	I0223 01:24:59.373883  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 01:24:59.383550  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.386803  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.386851  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.393159  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 01:24:59.401114  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 01:24:59.410493  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.413720  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.413769  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.419835  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 01:24:59.428503  764048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 01:24:59.431930  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0223 01:24:59.438516  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0223 01:24:59.444802  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0223 01:24:59.451032  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0223 01:24:59.457355  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0223 01:24:59.463364  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0223 01:24:59.469151  764048 kubeadm.go:404] StartCluster: {Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:59.469277  764048 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:24:59.486256  764048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 01:24:59.494482  764048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0223 01:24:59.494549  764048 kubeadm.go:636] restartCluster start
	I0223 01:24:59.494602  764048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 01:24:59.502465  764048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:24:59.503492  764048 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:24:59.504158  764048 kubeconfig.go:146] "old-k8s-version-799707" context is missing from /home/jenkins/minikube-integration/18233-317564/kubeconfig - will repair!
	I0223 01:24:59.505058  764048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/kubeconfig: {Name:mk5dc50cd20b0f8bda8ed11ebbad47615452aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:24:59.506938  764048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 01:24:59.515443  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:24:59.515508  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:24:59.525625  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.016225  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:00.016379  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:00.026296  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.515803  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:00.515913  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:00.526179  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:01.015710  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:01.015775  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:01.026278  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:01.515779  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:01.515870  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:01.526346  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:02.016199  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:02.016270  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:02.026597  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:02.516181  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:02.516275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:02.526556  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:03.016094  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:03.016199  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:03.026612  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:03.516213  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:03.516295  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:03.527347  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:04.015853  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:04.015934  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:04.025845  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:04.516436  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:04.516520  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:04.526628  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.016168  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:05.016238  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:05.026961  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.515470  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:05.515565  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:05.525559  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:06.016173  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:06.016270  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:06.027029  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:06.515495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:06.515612  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:06.525687  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:07.015495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:07.015568  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:07.026678  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:07.516253  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:07.516337  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:07.526391  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:08.015899  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:08.015968  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:08.025911  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:08.516098  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:08.516167  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:08.526981  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.016463  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:09.016557  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:09.029165  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.516495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:09.516648  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:09.526971  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.527005  764048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0223 01:25:09.527018  764048 kubeadm.go:1135] stopping kube-system containers ...
	I0223 01:25:09.527081  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:25:09.546502  764048 docker.go:483] Stopping containers: [b2cc87eecf70 a9fc8445a236 12be4814f743 7c810d52cd53]
	I0223 01:25:09.546580  764048 ssh_runner.go:195] Run: docker stop b2cc87eecf70 a9fc8445a236 12be4814f743 7c810d52cd53
	I0223 01:25:09.563682  764048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 01:25:09.576338  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:25:09.584800  764048 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb 23 01:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 23 01:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 23 01:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 23 01:19 /etc/kubernetes/scheduler.conf
	
	I0223 01:25:09.584871  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 01:25:09.593154  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 01:25:09.601622  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 01:25:09.610963  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 01:25:09.618978  764048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:25:09.627191  764048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 01:25:09.627226  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:09.680140  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:10.770745  764048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090560392s)
	I0223 01:25:10.770787  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:10.976122  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:11.038904  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:11.126325  764048 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:25:11.126417  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:11.626797  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:12.127298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:12.627247  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:13.127338  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:13.627257  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.127311  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.627274  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:15.126534  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:15.627263  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:16.127298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:16.627307  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:17.127218  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:17.627134  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:18.127282  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:18.626855  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:19.127245  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:19.627466  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.127275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.627329  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:21.127325  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:21.627266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:22.127189  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:22.627260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:23.126825  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:23.627188  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:24.126739  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:24.627267  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.127304  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.627260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:26.126891  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:26.626687  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:27.126498  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:27.626585  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:28.127243  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:28.627268  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:29.127312  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:29.627479  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.127263  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.627259  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:31.127252  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:31.627251  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:32.127266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:32.627298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:33.127260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:33.627313  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:34.126749  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:34.626911  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.127303  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.626713  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:36.127324  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:36.626786  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:37.126523  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:37.627410  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:38.127109  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:38.627259  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.126994  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.626468  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:40.127319  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:40.627250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:41.127266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:41.626871  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:42.127062  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:42.627285  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:43.127532  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:43.627370  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.127314  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.627262  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:45.127243  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:45.627257  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:46.127476  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:46.627250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:47.126569  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:47.627291  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:48.126638  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:48.627296  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.126978  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.627247  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:50.127306  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:50.626690  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:51.126800  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:51.627229  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:52.127250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:52.627255  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:53.127231  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:53.627268  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:54.127330  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:54.627261  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.127327  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.627272  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:56.127307  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:56.626853  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:57.127275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:57.627271  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:58.127321  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:58.627294  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.126813  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.627059  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:00.127271  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:00.627113  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:01.127202  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:01.626495  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:02.126951  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:02.627276  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:03.127241  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:03.627284  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.127323  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.626588  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:05.126876  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:05.627245  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:06.127301  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:06.626519  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:07.127217  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:07.627286  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:08.126680  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:08.626774  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:09.127378  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:09.627060  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:10.126842  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:10.626792  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:11.126910  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:11.145763  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.145788  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:11.145831  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:11.165136  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.165170  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:11.165223  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:11.182783  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.182815  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:11.182870  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:11.200040  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.200505  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:11.200588  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:11.219336  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.219369  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:11.219481  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:11.236888  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.236916  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:11.236979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:11.255241  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.255276  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:11.255349  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:11.273587  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.273613  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:11.273625  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:11.273645  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:11.301874  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:11.301911  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:11.367953  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:11.367981  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:11.367999  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:11.384915  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:11.384948  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:11.423686  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:11.423719  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:11.443811  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:50 old-k8s-version-799707 kubelet[1655]: E0223 01:25:50.226291    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.446025  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:51 old-k8s-version-799707 kubelet[1655]: E0223 01:25:51.225450    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.448865  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.225297    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.449155  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:11.468475  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.468779  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.472671  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.475027  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:11.483450  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:11.483476  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:11.483545  764048 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0223 01:26:11.483556  764048 out.go:239]   Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:11.483563  764048 out.go:239]   Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.483646  764048 out.go:239]   Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.483662  764048 out.go:239]   Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.483695  764048 out.go:239]   Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:11.483707  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:11.483716  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:21.485306  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:21.496387  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:21.514732  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.514762  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:21.514826  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:21.532743  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.532769  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:21.532815  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:21.550131  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.550159  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:21.550217  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:21.567723  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.567752  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:21.567810  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:21.586824  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.586864  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:21.586931  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:21.605250  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.605278  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:21.605328  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:21.623380  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.623417  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:21.623494  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:21.641554  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.641579  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:21.641593  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:21.641610  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:21.670812  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.671137  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.674833  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:21.677109  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.691648  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.695439  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.695961  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.697979  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:21.703403  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:21.703431  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:21.730898  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:21.730932  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:21.792948  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:21.792972  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:21.792988  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:21.810167  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:21.810200  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:21.847886  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:21.847911  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:21.847973  764048 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0223 01:26:21.847988  764048 out.go:239]   Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.847997  764048 out.go:239]   Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.848014  764048 out.go:239]   Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.848024  764048 out.go:239]   Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.848034  764048 out.go:239]   Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:21.848046  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:21.848068  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:31.849099  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:31.860777  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:31.880217  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.880249  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:31.880321  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:31.900070  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.900104  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:31.900177  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:31.924832  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.924871  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:31.924926  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:31.943201  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.943233  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:31.943293  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:31.963632  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.963659  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:31.963718  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:31.981603  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.981631  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:31.981687  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:31.999354  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.999385  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:31.999443  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:32.017697  764048 logs.go:276] 0 containers: []
	W0223 01:26:32.017726  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:32.017740  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:32.017757  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:32.045068  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.048789  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.049261  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:32.051257  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.064250  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.070945  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.073222  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.073688  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:32.075053  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:32.075076  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:32.101810  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:32.101851  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:32.162373  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:32.162404  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:32.162421  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:32.179945  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:32.179980  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
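The container-status probe above uses a shell fallback: prefer `crictl`, and fall back to `docker ps -a` if `crictl` is absent or fails. A minimal standalone sketch of that fallback pattern (no minikube node assumed; the `echo` stands in for actually running the probe):

```shell
# Hedged sketch of the fallback used in the probe above:
# `which crictl || echo crictl` yields the resolved path when crictl is on
# PATH, and the bare name otherwise; `cmd1 || cmd2` would then try docker
# only if the crictl invocation fails.
probe="$(which crictl 2>/dev/null || echo crictl)"
echo "would run: sudo $probe ps -a || sudo docker ps -a"
```

Either way, `$probe` ends in `crictl`, so the composed command line is well-formed whether or not the binary is installed.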
	I0223 01:26:32.216971  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:32.217002  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:32.217070  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:32.217085  764048 out.go:239]   Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.217101  764048 out.go:239]   Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.217112  764048 out.go:239]   Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.217130  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.217144  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:32.217159  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:32.217167  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:42.219253  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:42.229496  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:42.247555  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.247587  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:42.247642  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:42.265205  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.265236  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:42.265284  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:42.284632  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.284661  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:42.284719  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:42.302235  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.302263  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:42.302323  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:42.319683  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.319709  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:42.319767  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:42.338672  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.338696  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:42.338741  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:42.356628  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.356654  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:42.356705  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:42.374290  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.374319  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:42.374334  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:42.374348  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:42.408608  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:42.415731  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:42.418148  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:42.418679  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:42.435726  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:26:42.437740  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:42.437760  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:42.465460  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:42.465489  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:42.524278  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:42.524299  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:42.524312  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:42.540348  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:42.540377  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:42.578403  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:42.578438  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:42.578496  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:42.578507  764048 out.go:239]   Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:42.578531  764048 out.go:239]   Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:42.578546  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:42.578551  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:42.578559  764048 out.go:239]   Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:26:42.578573  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:42.578581  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:52.580305  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:52.590732  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:52.607693  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.607725  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:52.607771  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:52.624842  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.624873  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:52.624922  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:52.642827  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.642852  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:52.642899  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:52.660436  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.660462  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:52.660517  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:52.677507  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.677544  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:52.677610  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:52.694555  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.694587  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:52.694642  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:52.712215  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.712248  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:52.712299  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:52.729809  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.729833  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:52.729844  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:52.729857  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:52.748858  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:52.751124  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.752064  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.769290  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:52.772963  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.775019  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.777255  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:52.788895  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:52.788921  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:52.815781  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:52.815820  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:52.875541  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:52.875571  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:52.875587  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:52.897948  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:52.897975  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:52.932891  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:52.932917  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:52.933044  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:52.933066  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.933075  764048 out.go:239]   Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:52.933087  764048 out.go:239]   Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.933099  764048 out.go:239]   Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.933108  764048 out.go:239]   Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:52.933117  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:52.933127  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:02.934253  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:02.945035  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:02.964813  764048 logs.go:276] 0 containers: []
	W0223 01:27:02.964846  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:02.964914  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:02.985554  764048 logs.go:276] 0 containers: []
	W0223 01:27:02.985586  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:02.985643  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:03.003541  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.003573  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:03.003636  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:03.023214  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.023240  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:03.023296  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:03.043054  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.043085  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:03.043148  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:03.061854  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.061886  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:03.061941  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:03.081342  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.081374  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:03.081428  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:03.100486  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.100514  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
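The probe sequence above is one `docker ps` per expected control-plane container, filtering on the `k8s_<component>` name prefix. A minimal sketch of that loop — `probe_cmd` is a hypothetical helper, but the component list and the filter/format flags are taken verbatim from the log lines:

```shell
# Hypothetical sketch of minikube's per-component container probe:
# emit one `docker ps` command per expected control-plane container,
# filtering on the k8s_<component> name prefix seen in the log above.
probe_cmd() {
  echo "docker ps -a --filter=name=k8s_$1 --format={{.ID}}"
}

for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
    kube-controller-manager kindnet kubernetes-dashboard; do
  probe_cmd "$c"   # minikube runs the emitted command over SSH
done
```

In this run every probe comes back "0 containers: []", which is why each attempt falls through to log gathering instead of health-checking a running apiserver.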
	I0223 01:27:03.100528  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:03.100545  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:03.121342  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.125184  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:03.127641  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.130747  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.145918  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.152913  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.153303  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.157613  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
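Every kubelet problem above is the same `ImageInspectError`: the message "Id or size of image ... is not set" indicates the legacy kubelet's dockershim rejected the image-inspect response because it lacked a non-empty Id or a non-zero Size. A minimal sketch of that validation, assuming a JSON inspect payload — the `inspect_ok` helper, the sed-based parsing, and the sample documents are all fabricated for illustration:

```shell
# Sketch of the dockershim-style check behind "Id or size of image is
# not set": accept an image-inspect JSON document only if it carries a
# non-empty Id and a non-zero Size. sed parsing is illustrative only.
inspect_ok() {
  id=$(printf '%s' "$1" | sed -n 's/.*"Id" *: *"\([^"]*\)".*/\1/p')
  size=$(printf '%s' "$1" | sed -n 's/.*"Size" *: *\([0-9][0-9]*\).*/\1/p')
  [ -n "$id" ] && [ -n "$size" ] && [ "$size" != "0" ]
}
```

If the daemon's response never populates these fields — as apparently happens here for every v1.16-era image — the static pods can never start, so the apiserver stays down and every later probe and `describe nodes` call fails.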
	I0223 01:27:03.166434  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:03.166466  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:03.196885  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:03.196921  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:03.265084  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
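The `describe nodes` failure above is secondary: with no apiserver container running, nothing listens on localhost:8443, so kubectl gets connection refused. minikube therefore repeats the whole probe-and-gather cycle on a deadline (see the 01:27:02, 01:27:13, and 01:27:23 attempts in this log). That retry behaviour can be sketched as a bounded wait loop — `wait_for` is a hypothetical helper, and the real loop also sleeps between attempts, omitted here:

```shell
# Hypothetical bounded retry, in the spirit of minikube's apiserver wait:
# run a check command up to N times and report whether it ever succeeded.
# (The real loop sleeps between attempts; omitted in this sketch.)
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
  done
  return 1
}

# e.g. wait_for 30 curl -sk --max-time 2 https://localhost:8443/healthz
```

Because the underlying ImageInspectError never clears, every attempt fails the same way until the outer test times out.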
	I0223 01:27:03.265110  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:03.265124  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:03.282530  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:03.282564  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
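The container-status command above relies on a shell fallback: `which crictl || echo crictl` substitutes the crictl path when it is installed, and otherwise the bare word `crictl`, whose failure then triggers the trailing `|| sudo docker ps -a`. The same idea sketched as a small helper (`tool_cmd` is hypothetical):

```shell
# Sketch of the crictl-or-docker fallback used in the log line above:
# prefer crictl when it is on PATH, otherwise fall back to docker.
tool_cmd() {
  if command -v crictl >/dev/null 2>&1; then
    echo "crictl ps -a"
  else
    echo "docker ps -a"
  fi
}
```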
	I0223 01:27:03.321418  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:03.321443  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:03.321514  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:03.321527  764048 out.go:239]   Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.321540  764048 out.go:239]   Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.321554  764048 out.go:239]   Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.321563  764048 out.go:239]   Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.321573  764048 out.go:239]   Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:03.321582  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:03.321593  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:13.323129  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:13.333740  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:13.351749  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.351777  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:13.351843  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:13.369194  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.369219  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:13.369271  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:13.386603  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.386629  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:13.386698  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:13.404358  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.404389  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:13.404450  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:13.422585  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.422613  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:13.422674  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:13.440278  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.440309  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:13.440358  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:13.459814  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.459846  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:13.459901  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:13.477486  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.477514  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:13.477529  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:13.477546  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:13.502463  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:13.502498  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:13.567760  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:13.567784  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:13.567802  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:13.586261  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:13.586292  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:13.630660  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:13.630698  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:13.653846  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.660373  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.660749  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.664562  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:13.679481  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.680005  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.683875  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.689235  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:13.691661  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:13.691680  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:13.691742  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:13.691759  764048 out.go:239]   Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:13.691770  764048 out.go:239]   Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.691778  764048 out.go:239]   Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.691784  764048 out.go:239]   Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.691792  764048 out.go:239]   Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:13.691801  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:13.691811  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:23.692473  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:23.703266  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:23.722231  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.722260  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:23.722328  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:23.740592  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.740625  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:23.740691  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:23.759630  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.759655  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:23.759701  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:23.777152  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.777182  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:23.777252  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:23.794715  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.794746  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:23.794812  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:23.812469  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.812494  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:23.812554  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:23.830330  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.830357  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:23.830409  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:23.847767  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.847791  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:23.847802  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:23.847813  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:23.864330  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:23.864362  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:23.900552  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:23.900582  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:23.935656  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:23.936227  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:23.940498  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:23.946760  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:23.957938  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:23.965312  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:23.967639  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:27:23.968659  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:23.968676  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:23.995207  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:23.995243  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:24.054134  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:24.054163  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:24.054186  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:24.054242  764048 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0223 01:27:24.054257  764048 out.go:239]   Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:24.054269  764048 out.go:239]   Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:24.054280  764048 out.go:239]   Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:24.054294  764048 out.go:239]   Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:24.054309  764048 out.go:239]   Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:27:24.054321  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:24.054329  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:34.056179  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:34.068644  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:34.091576  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.091606  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:34.091662  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:34.112999  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.113029  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:34.113083  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:34.135911  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.135948  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:34.136009  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:34.155552  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.155584  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:34.155639  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:34.172644  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.172674  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:34.172731  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:34.193231  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.193261  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:34.193318  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:34.213564  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.213587  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:34.213632  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:34.234247  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.234274  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:34.234288  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:34.234304  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:34.254068  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:34.254102  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:34.294146  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:34.294180  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:34.318533  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.329296  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:34.339920  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:34.343536  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:34.350850  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.358682  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:34.367366  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:34.367396  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:34.403850  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:34.403915  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:34.479101  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:34.479131  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:34.479144  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:34.479211  764048 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0223 01:27:34.479227  764048 out.go:239]   Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:34.479247  764048 out.go:239]   Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:34.479266  764048 out.go:239]   Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:34.479275  764048 out.go:239]   Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.479284  764048 out.go:239]   Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:34.479292  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:34.479304  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:44.481194  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:44.492741  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:44.510893  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.510919  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:44.510979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:44.528074  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.528099  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:44.528147  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:44.545615  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.545650  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:44.545711  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:44.562131  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.562157  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:44.562216  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:44.579943  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.579968  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:44.580032  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:44.597379  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.597405  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:44.597469  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:44.614583  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.614645  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:44.614736  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:44.632117  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.632153  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:44.632167  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:44.632182  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:44.649949  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.652196  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.657147  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.664845  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:44.673447  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.677423  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.682830  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.686738  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:44.690877  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:44.690909  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:44.719106  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:44.719147  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:44.778079  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:44.778107  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:44.778126  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:44.794656  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:44.794686  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:44.831247  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:44.831275  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:44.831339  764048 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0223 01:27:44.831351  764048 out.go:239]   Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:44.831360  764048 out.go:239]   Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.831371  764048 out.go:239]   Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.831379  764048 out.go:239]   Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.831390  764048 out.go:239]   Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:44.831397  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:44.831405  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:54.832552  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:54.843379  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:54.861974  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.862004  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:54.862082  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:54.880013  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.880054  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:54.880110  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:54.896746  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.896776  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:54.896846  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:54.913796  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.913826  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:54.913899  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:54.931897  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.931928  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:54.931988  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:54.949435  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.949468  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:54.949534  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:54.966362  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.966386  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:54.966431  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:54.983954  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.983982  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:54.983995  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:54.984011  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:54.999879  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:54.999907  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:55.037126  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:55.037156  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:55.059470  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.063298  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.068690  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:55.072516  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.081122  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.090028  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.092291  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.092793  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:55.095603  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:55.095626  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:55.123414  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:55.123451  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:55.179936  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:55.179960  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:55.179971  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:55.180020  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:55.180032  764048 out.go:239]   Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.180039  764048 out.go:239]   Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.180072  764048 out.go:239]   Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.180086  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.180105  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:55.180114  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:55.180124  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:05.181993  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:05.192424  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:05.210121  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.210156  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:05.210200  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:05.228650  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.228675  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:05.228723  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:05.245884  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.245913  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:05.245979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:05.262993  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.263028  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:05.263088  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:05.280340  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.280371  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:05.280435  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:05.297947  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.297970  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:05.298018  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:05.315334  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.315366  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:05.315425  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:05.332647  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.332671  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:05.332681  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:05.332694  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:05.348614  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:05.348642  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:05.384048  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:05.384079  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:05.402702  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.411066  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.419595  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.421739  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.422302  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:05.430697  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.440486  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.442750  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:05.443073  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:05.443095  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:05.468968  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:05.469004  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:05.527294  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:05.527344  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:05.527358  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:05.527423  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:05.527440  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.527456  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:05.527471  764048 out.go:239]   Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.527486  764048 out.go:239]   Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.527501  764048 out.go:239]   Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:05.527515  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:05.527523  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:15.528852  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:15.540245  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:15.557540  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.557566  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:15.557615  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:15.573753  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.573777  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:15.573835  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:15.590472  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.590500  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:15.590554  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:15.608537  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.608568  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:15.608647  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:15.624845  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.624875  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:15.624930  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:15.641988  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.642016  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:15.642095  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:15.660022  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.660052  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:15.660102  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:15.677241  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.677266  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:15.677277  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:15.677291  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:15.703651  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:15.703682  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:15.762510  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:15.762531  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:15.762544  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:15.778772  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:15.778803  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:15.815612  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:15.815642  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:15.834932  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.835453  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.844214  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:15.854157  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:15.856473  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.861781  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.870466  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:15.874488  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:15.874509  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:15.874577  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:15.874592  764048 out.go:239]   Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:15.874601  764048 out.go:239]   Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:15.874613  764048 out.go:239]   Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.874627  764048 out.go:239]   Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.874638  764048 out.go:239]   Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:15.874649  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:15.874660  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:25.876148  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:25.886833  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:25.903865  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.903895  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:25.903941  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:25.921203  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.921229  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:25.921272  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:25.938748  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.938776  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:25.938825  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:25.956769  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.956792  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:25.956845  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:25.973495  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.973518  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:25.973561  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:25.992272  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.992298  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:25.992349  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:26.010007  764048 logs.go:276] 0 containers: []
	W0223 01:28:26.010030  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:26.010111  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:26.027042  764048 logs.go:276] 0 containers: []
	W0223 01:28:26.027073  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:26.027087  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:26.027103  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:26.083781  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:26.083807  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:26.083824  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:26.099963  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:26.099992  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:26.137069  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:26.137100  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:26.157617  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.159983  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.165342  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:26.174225  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:26.179523  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.185204  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.192290  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:28:26.197251  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:26.197274  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:26.222726  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:26.222752  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:26.222806  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:26.222818  764048 out.go:239]   Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:26.222824  764048 out.go:239]   Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:26.222834  764048 out.go:239]   Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.222842  764048 out.go:239]   Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.222853  764048 out.go:239]   Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:28:26.222864  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:26.222870  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:36.224294  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:36.234593  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:36.252123  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.252147  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:36.252201  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:36.270152  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.270181  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:36.270234  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:36.286776  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.286803  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:36.286857  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:36.303407  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.303443  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:36.303500  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:36.320332  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.320360  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:36.320402  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:36.337290  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.337318  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:36.337367  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:36.356032  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.356056  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:36.356109  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:36.372883  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.372909  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:36.372919  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:36.372931  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:36.388787  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:36.388825  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:36.424874  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:36.424910  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:36.445848  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.451297  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.456927  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:36.463893  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:36.471013  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.477862  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.485415  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:28:36.488865  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:36.488888  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:36.516057  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:36.516089  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:36.573623  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:36.573645  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:36.573658  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:36.573725  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:36.573738  764048 out.go:239]   Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:36.573747  764048 out.go:239]   Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:36.573757  764048 out.go:239]   Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.573771  764048 out.go:239]   Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.573783  764048 out.go:239]   Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:28:36.573794  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:36.573807  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:46.575225  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:46.585661  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:46.602730  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.602756  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:46.602806  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:46.620030  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.620061  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:46.620109  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:46.637449  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.637478  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:46.637529  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:46.655302  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.655353  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:46.655405  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:46.672835  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.672859  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:46.672906  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:46.689042  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.689074  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:46.689128  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:46.705921  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.705949  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:46.706010  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:46.722399  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.722429  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:46.722442  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:46.722459  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:46.778773  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:46.778800  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:46.778815  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:46.794759  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:46.794791  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:46.831175  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:46.831207  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:46.858565  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.865386  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:46.871324  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:46.878984  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:46.881096  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.893561  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:46.894713  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:46.894737  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:46.920290  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:46.920317  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:46.920373  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:46.920384  764048 out.go:239]   Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:46.920391  764048 out.go:239]   Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:46.920401  764048 out.go:239]   Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:46.920409  764048 out.go:239]   Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.920418  764048 out.go:239]   Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:46.920424  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:46.920432  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:56.921234  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:56.932263  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:56.950133  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.950165  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:56.950211  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:56.967513  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.967544  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:56.967610  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:56.985114  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.985135  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:56.985190  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:57.001619  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.001645  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:57.001690  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:57.019356  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.019381  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:57.019428  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:57.036683  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.036711  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:57.036776  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:57.053460  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.053489  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:57.053536  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:57.070212  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.070240  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:57.070253  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:57.070270  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:57.096008  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:57.096044  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:57.153794  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:57.153817  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:57.153833  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:57.170295  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:57.170328  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:57.205650  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:57.205677  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:57.227302  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.234866  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.236884  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:57.248965  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:57.254557  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.254822  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.263128  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:57.267869  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:57.267897  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:57.267963  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:57.267977  764048 out.go:239]   Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:57.267989  764048 out.go:239]   Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:57.267998  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.268008  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.268018  764048 out.go:239]   Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:57.268026  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:57.268031  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:29:07.269999  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:29:07.280827  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:29:07.297977  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.298005  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:29:07.298075  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:29:07.315186  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.315222  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:29:07.315276  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:29:07.332204  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.332234  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:29:07.332284  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:29:07.349378  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.349407  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:29:07.349461  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:29:07.366248  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.366275  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:29:07.366340  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:29:07.384205  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.384229  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:29:07.384287  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:29:07.402600  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.402625  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:29:07.402678  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:29:07.420951  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.420984  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:29:07.421000  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:29:07.421022  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:29:07.446613  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:29:07.446648  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:29:07.505820  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:29:07.505841  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:29:07.505859  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:29:07.521736  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:29:07.521819  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:29:07.559319  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:29:07.559353  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:29:07.583248  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.588793  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.589050  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:29:07.597224  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:29:07.605348  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:59 old-k8s-version-799707 kubelet[1655]: E0223 01:28:59.224549    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.610814  764048 logs.go:138] Found kubelet problem: Feb 23 01:29:02 old-k8s-version-799707 kubelet[1655]: E0223 01:29:02.224722    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.612796  764048 logs.go:138] Found kubelet problem: Feb 23 01:29:03 old-k8s-version-799707 kubelet[1655]: E0223 01:29:03.225000    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:29:07.619406  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:29:07.619427  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:29:07.619490  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:29:07.619501  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:29:07.619510  764048 out.go:239]   Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:29:07.619519  764048 out.go:239]   Feb 23 01:28:59 old-k8s-version-799707 kubelet[1655]: E0223 01:28:59.224549    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.619526  764048 out.go:239]   Feb 23 01:29:02 old-k8s-version-799707 kubelet[1655]: E0223 01:29:02.224722    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.619535  764048 out.go:239]   Feb 23 01:29:03 old-k8s-version-799707 kubelet[1655]: E0223 01:29:03.225000    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:29:07.619540  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:29:07.619547  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:29:17.620865  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:29:17.631202  764048 kubeadm.go:640] restartCluster took 4m18.136634178s
	W0223 01:29:17.631285  764048 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0223 01:29:17.631316  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:29:18.369723  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:29:18.380597  764048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:29:18.389648  764048 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:29:18.389701  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:29:18.397500  764048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:29:18.397542  764048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:29:18.444581  764048 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:29:18.444639  764048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:29:18.612172  764048 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:29:18.612306  764048 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:29:18.612397  764048 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:29:18.612453  764048 kubeadm.go:322] OS: Linux
	I0223 01:29:18.612523  764048 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:29:18.612593  764048 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:29:18.612684  764048 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:29:18.612758  764048 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:29:18.612840  764048 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:29:18.612911  764048 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:29:18.685576  764048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:29:18.685704  764048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:29:18.685805  764048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:29:18.862281  764048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:29:18.863574  764048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:29:18.870417  764048 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:29:18.940701  764048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:29:18.943092  764048 out.go:204]   - Generating certificates and keys ...
	I0223 01:29:18.943199  764048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:29:18.943290  764048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:29:18.943424  764048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:29:18.943551  764048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:29:18.943651  764048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:29:18.943746  764048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:29:18.943837  764048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:29:18.943942  764048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:29:18.944060  764048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:29:18.944168  764048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:29:18.944239  764048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:29:18.944323  764048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:29:19.128104  764048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:29:19.237894  764048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:29:19.392875  764048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:29:19.789723  764048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:29:19.790432  764048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:29:19.792764  764048 out.go:204]   - Booting up control plane ...
	I0223 01:29:19.792883  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:29:19.795900  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:29:19.796833  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:29:19.797487  764048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:29:19.801650  764048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:29:59.801941  764048 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:33:19.803128  764048 kubeadm.go:322] 
	I0223 01:33:19.803259  764048 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:33:19.803344  764048 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:33:19.803356  764048 kubeadm.go:322] 
	I0223 01:33:19.803405  764048 kubeadm.go:322] This error is likely caused by:
	I0223 01:33:19.803459  764048 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:33:19.803603  764048 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:33:19.803628  764048 kubeadm.go:322] 
	I0223 01:33:19.803738  764048 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:33:19.803768  764048 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:33:19.803850  764048 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:33:19.803871  764048 kubeadm.go:322] 
	I0223 01:33:19.803995  764048 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:33:19.804094  764048 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:33:19.804166  764048 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:33:19.804208  764048 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:33:19.804275  764048 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:33:19.804316  764048 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:33:19.807097  764048 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:33:19.807290  764048 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:33:19.807529  764048 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:33:19.807675  764048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:33:19.807772  764048 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:33:19.807870  764048 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 01:33:19.808072  764048 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
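Editor's note: the kubeadm output above suggests filtering `docker ps -a` for Kubernetes containers with `grep kube | grep -v pause`. The sketch below demonstrates that exact pipeline against a hypothetical sample listing (the container IDs and image names are invented for illustration), so it can be tried without a live Docker daemon; against the real node you would pipe `sudo docker ps -a` in instead.

```shell
# Hypothetical sample of `docker ps -a` output: one control-plane
# container, one pause sandbox, one unrelated container.
sample='abc123 k8s.gcr.io/kube-apiserver:v1.16.0 kube-apiserver
def456 k8s.gcr.io/pause:3.1 pause
ghi789 nginx:latest web'

# The filter kubeadm recommends: keep Kubernetes containers,
# drop the pause sandboxes.
printf '%s\n' "$sample" | grep kube | grep -v pause
```

Only the `kube-apiserver` line survives the filter; its first column is the CONTAINERID to pass to `docker logs`.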
	
	I0223 01:33:19.808143  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:33:20.547610  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:33:20.558373  764048 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:33:20.558424  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:33:20.566388  764048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:33:20.566427  764048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:33:20.729151  764048 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:33:20.781037  764048 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:33:20.781265  764048 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:33:20.850891  764048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:37:22.170348  764048 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:37:22.170473  764048 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 01:37:22.173668  764048 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:37:22.173765  764048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:37:22.173849  764048 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:37:22.173919  764048 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:37:22.173985  764048 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:37:22.174061  764048 kubeadm.go:322] OS: Linux
	I0223 01:37:22.174159  764048 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:37:22.174260  764048 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:37:22.174347  764048 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:37:22.174416  764048 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:37:22.174494  764048 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:37:22.174580  764048 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:37:22.174682  764048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:37:22.174824  764048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:37:22.174918  764048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:37:22.175001  764048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:37:22.175091  764048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:37:22.175146  764048 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:37:22.175219  764048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:37:22.178003  764048 out.go:204]   - Generating certificates and keys ...
	I0223 01:37:22.178119  764048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:37:22.178193  764048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:37:22.178302  764048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:37:22.178387  764048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:37:22.178478  764048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:37:22.178552  764048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:37:22.178641  764048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:37:22.178748  764048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:37:22.178857  764048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:37:22.178961  764048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:37:22.179025  764048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:37:22.179093  764048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:37:22.179146  764048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:37:22.179223  764048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:37:22.179324  764048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:37:22.179381  764048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:37:22.179437  764048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:37:22.181274  764048 out.go:204]   - Booting up control plane ...
	I0223 01:37:22.181375  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:37:22.181453  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:37:22.181527  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:37:22.181637  764048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:37:22.181807  764048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:37:22.181876  764048 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:37:22.181886  764048 kubeadm.go:322] 
	I0223 01:37:22.181942  764048 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:37:22.182003  764048 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:37:22.182013  764048 kubeadm.go:322] 
	I0223 01:37:22.182075  764048 kubeadm.go:322] This error is likely caused by:
	I0223 01:37:22.182121  764048 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:37:22.182283  764048 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:37:22.182302  764048 kubeadm.go:322] 
	I0223 01:37:22.182461  764048 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:37:22.182511  764048 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:37:22.182563  764048 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:37:22.182575  764048 kubeadm.go:322] 
	I0223 01:37:22.182695  764048 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:37:22.182775  764048 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:37:22.182859  764048 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:37:22.182908  764048 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:37:22.183006  764048 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:37:22.183099  764048 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:37:22.183153  764048 kubeadm.go:406] StartCluster complete in 12m22.714008739s
	I0223 01:37:22.183276  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:37:22.201132  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.201156  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:37:22.201204  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:37:22.217542  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.217566  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:37:22.217616  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:37:22.234150  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.234171  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:37:22.234219  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:37:22.250946  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.250970  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:37:22.251013  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:37:22.268791  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.268815  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:37:22.268861  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:37:22.285465  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.285490  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:37:22.285540  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:37:22.300896  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.300922  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:37:22.300966  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:37:22.318198  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.318231  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:37:22.318247  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:37:22.318263  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:37:22.344168  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:37:22.344203  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:37:22.403384  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:37:22.403409  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:37:22.403422  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:37:22.420357  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:37:22.420386  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:37:22.457253  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:37:22.457281  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:37:22.486720  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:37:22.488920  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:37:22.490985  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:37:22.500879  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:12 old-k8s-version-799707 kubelet[11323]: E0223 01:37:12.661582   11323 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:37:22.507247  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:16 old-k8s-version-799707 kubelet[11323]: E0223 01:37:16.660990   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:37:22.509845  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:17 old-k8s-version-799707 kubelet[11323]: E0223 01:37:17.661645   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:37:22.509984  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:17 old-k8s-version-799707 kubelet[11323]: E0223 01:37:17.662744   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:37:22.517459  764048 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 01:37:22.517494  764048 out.go:239] * 
	W0223 01:37:22.517554  764048 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:37:22.517575  764048 out.go:239] * 
	W0223 01:37:22.518396  764048 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 01:37:22.521264  764048 out.go:177] X Problems detected in kubelet:
	I0223 01:37:22.522757  764048 out.go:177]   Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:37:22.525145  764048 out.go:177]   Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:37:22.526737  764048 out.go:177]   Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:37:22.529582  764048 out.go:177] 
	W0223 01:37:22.531019  764048 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:37:22.531067  764048 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 01:37:22.531087  764048 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 01:37:22.532677  764048 out.go:177] 

                                                
                                                
** /stderr **
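The kubeadm output above suggests `docker ps -a | grep kube | grep -v pause` to find a crashed control-plane container. A minimal sketch of what that filter does, run against hypothetical `docker ps -a` lines (the container IDs and statuses below are made up for illustration; only the grep pipeline is from the log):

```shell
# Hypothetical `docker ps -a` rows; the filter keeps kube containers
# and drops the pause sandbox containers, as kubeadm suggests.
printf '%s\n' \
  'a1b2c3  k8s_kube-apiserver_kube-apiserver-minikube  Exited (1)' \
  'd4e5f6  k8s_POD_kube-apiserver-minikube_pause       Up 2 minutes' \
  'g7h8i9  k8s_coredns_coredns-minikube                Up 2 minutes' \
  | grep kube | grep -v pause
# A crashed component would show an "Exited" status here, and
# `docker logs a1b2c3` would then show why it died.
```

In this sketch the apiserver and coredns rows survive the filter and the pause sandbox row is dropped.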
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-799707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 109
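The stderr above warns that Docker is using the "cgroupfs" cgroup driver while the kubelet expects "systemd", and minikube's suggestion is to pass `--extra-config=kubelet.cgroup-driver=systemd`. The other common way to align the two (an assumption here, not something this log confirms fixed the failure) is to switch the Docker daemon itself to the systemd driver in `/etc/docker/daemon.json`:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After editing that file the daemon must be restarted (`systemctl restart docker`) for the driver change to take effect; `docker info --format '{{.CgroupDriver}}'` then reports which driver is active.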
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-799707
helpers_test.go:235: (dbg) docker inspect old-k8s-version-799707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef",
	        "Created": "2024-02-23T01:15:05.474444114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 764330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:24:47.445426862Z",
	            "FinishedAt": "2024-02-23T01:24:45.932121046Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hosts",
	        "LogPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef-json.log",
	        "Name": "/old-k8s-version-799707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-799707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-799707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-799707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-799707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-799707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "495a40141205d4b737de198208cda7ff4e29ad58e3734988072fdb79c40f1629",
	            "SandboxKey": "/var/run/docker/netns/495a40141205",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-799707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f679df36dcf9",
	                        "old-k8s-version-799707"
	                    ],
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "bd295bc817aac655859be5f1040d2c41b5d0e7f3be9c06731d2af745450199fa",
	                    "EndpointID": "c759a40b24c96fa9e217e997f388484d82cda8c2ddb821b96919ecc179490888",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-799707",
	                        "f679df36dcf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 2 (290.953479ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-799707 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-643873       | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-538058             | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-538058                  | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-538058 --memory=2200 --alsologtostderr   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-538058 image list                           | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	| delete  | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	| addons  | enable metrics-server -p old-k8s-version-799707        | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-799707                              | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:24 UTC | 23 Feb 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-799707             | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:24 UTC | 23 Feb 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-799707                              | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-039066 image list                          | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	| delete  | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	| image   | default-k8s-diff-port-643873                           | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/23 01:24:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 01:24:47.003793  764048 out.go:291] Setting OutFile to fd 1 ...
	I0223 01:24:47.004093  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:24:47.004104  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:24:47.004109  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:24:47.004297  764048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 01:24:47.004973  764048 out.go:298] Setting JSON to false
	I0223 01:24:47.006519  764048 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7636,"bootTime":1708643851,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 01:24:47.006586  764048 start.go:139] virtualization: kvm guest
	I0223 01:24:47.008747  764048 out.go:177] * [old-k8s-version-799707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 01:24:47.010551  764048 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 01:24:47.011904  764048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 01:24:47.010620  764048 notify.go:220] Checking for updates...
	I0223 01:24:47.014507  764048 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:24:47.015864  764048 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 01:24:47.017138  764048 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 01:24:47.018411  764048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 01:24:47.020066  764048 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:24:47.021857  764048 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0223 01:24:47.023120  764048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 01:24:47.046565  764048 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 01:24:47.046673  764048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:24:47.099610  764048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:24:47.089716386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:24:47.099718  764048 docker.go:295] overlay module found
	I0223 01:24:47.101615  764048 out.go:177] * Using the docker driver based on existing profile
	I0223 01:24:47.102883  764048 start.go:299] selected driver: docker
	I0223 01:24:47.102897  764048 start.go:903] validating driver "docker" against &{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:47.102997  764048 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 01:24:47.103795  764048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:24:47.153625  764048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:24:47.144803249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:24:47.154044  764048 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 01:24:47.154166  764048 cni.go:84] Creating CNI manager for ""
	I0223 01:24:47.154193  764048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:24:47.154210  764048 start_flags.go:323] config:
	{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:47.156027  764048 out.go:177] * Starting control plane node old-k8s-version-799707 in cluster old-k8s-version-799707
	I0223 01:24:47.157370  764048 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 01:24:47.158890  764048 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 01:24:47.160251  764048 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:24:47.160288  764048 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 01:24:47.160309  764048 cache.go:56] Caching tarball of preloaded images
	I0223 01:24:47.160343  764048 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 01:24:47.160431  764048 preload.go:174] Found /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 01:24:47.160444  764048 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 01:24:47.160574  764048 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:24:47.176632  764048 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 01:24:47.176654  764048 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 01:24:47.176673  764048 cache.go:194] Successfully downloaded all kic artifacts
	I0223 01:24:47.176702  764048 start.go:365] acquiring machines lock for old-k8s-version-799707: {Name:mkec58acc477a1259ea890fef71c8d064abcdc6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 01:24:47.176766  764048 start.go:369] acquired machines lock for "old-k8s-version-799707" in 43.242µs
	I0223 01:24:47.176791  764048 start.go:96] Skipping create...Using existing machine configuration
	I0223 01:24:47.176797  764048 fix.go:54] fixHost starting: 
	I0223 01:24:47.177008  764048 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:24:47.192721  764048 fix.go:102] recreateIfNeeded on old-k8s-version-799707: state=Stopped err=<nil>
	W0223 01:24:47.192746  764048 fix.go:128] unexpected machine state, will restart: <nil>
	I0223 01:24:47.194605  764048 out.go:177] * Restarting existing docker container for "old-k8s-version-799707" ...
	I0223 01:24:43.956785  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:45.957865  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:48.456454  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:45.509045  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:48.007627  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:47.195889  764048 cli_runner.go:164] Run: docker start old-k8s-version-799707
	I0223 01:24:47.452279  764048 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:24:47.471747  764048 kic.go:430] container "old-k8s-version-799707" state is running.
	I0223 01:24:47.472285  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:47.489570  764048 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:24:47.489761  764048 machine.go:88] provisioning docker machine ...
	I0223 01:24:47.489782  764048 ubuntu.go:169] provisioning hostname "old-k8s-version-799707"
	I0223 01:24:47.489818  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:47.506471  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:47.506715  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:47.506741  764048 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-799707 && echo "old-k8s-version-799707" | sudo tee /etc/hostname
	I0223 01:24:47.507401  764048 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40278->127.0.0.1:33414: read: connection reset by peer
	I0223 01:24:50.649171  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-799707
	
	I0223 01:24:50.649264  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:50.668220  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:50.668659  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:50.668690  764048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-799707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-799707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-799707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 01:24:50.798415  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:24:50.798446  764048 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 01:24:50.798504  764048 ubuntu.go:177] setting up certificates
	I0223 01:24:50.798521  764048 provision.go:83] configureAuth start
	I0223 01:24:50.798581  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:50.815373  764048 provision.go:138] copyHostCerts
	I0223 01:24:50.815447  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 01:24:50.815464  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 01:24:50.815542  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 01:24:50.815649  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 01:24:50.815662  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 01:24:50.815698  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 01:24:50.815828  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 01:24:50.815845  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 01:24:50.815883  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 01:24:50.815954  764048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-799707 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-799707]
	I0223 01:24:50.956162  764048 provision.go:172] copyRemoteCerts
	I0223 01:24:50.956237  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 01:24:50.956294  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:50.973887  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.066745  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 01:24:51.088783  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 01:24:51.114161  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 01:24:51.136302  764048 provision.go:86] duration metric: configureAuth took 337.765346ms
	I0223 01:24:51.136338  764048 ubuntu.go:193] setting minikube options for container-runtime
	I0223 01:24:51.136542  764048 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:24:51.136603  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.153110  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.153343  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.153360  764048 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 01:24:51.282447  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 01:24:51.282475  764048 ubuntu.go:71] root file system type: overlay
	I0223 01:24:51.282624  764048 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 01:24:51.282692  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.300243  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.300450  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.300510  764048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 01:24:51.445956  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 01:24:51.446035  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.464137  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.464317  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.464339  764048 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 01:24:51.599209  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:24:51.599236  764048 machine.go:91] provisioned docker machine in 4.109460251s
	I0223 01:24:51.599249  764048 start.go:300] post-start starting for "old-k8s-version-799707" (driver="docker")
	I0223 01:24:51.599259  764048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 01:24:51.599311  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 01:24:51.599368  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.617077  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.714796  764048 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 01:24:51.717878  764048 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 01:24:51.717913  764048 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 01:24:51.717926  764048 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 01:24:51.717935  764048 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 01:24:51.717949  764048 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 01:24:51.718015  764048 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 01:24:51.718126  764048 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 01:24:51.718238  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 01:24:51.726135  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:24:51.747990  764048 start.go:303] post-start completed in 148.727396ms
	I0223 01:24:51.748091  764048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 01:24:51.748133  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.764872  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.854725  764048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 01:24:51.858894  764048 fix.go:56] fixHost completed within 4.682089908s
	I0223 01:24:51.858929  764048 start.go:83] releasing machines lock for "old-k8s-version-799707", held for 4.682151168s
	I0223 01:24:51.858987  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:51.875113  764048 ssh_runner.go:195] Run: cat /version.json
	I0223 01:24:51.875169  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.875222  764048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 01:24:51.875284  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.892186  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.892603  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.981915  764048 ssh_runner.go:195] Run: systemctl --version
	I0223 01:24:52.071583  764048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 01:24:52.076094  764048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 01:24:52.076150  764048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 01:24:52.084570  764048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 01:24:52.093490  764048 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 01:24:52.093526  764048 start.go:475] detecting cgroup driver to use...
	I0223 01:24:52.093556  764048 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:24:52.093683  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:24:52.109388  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 01:24:52.119408  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 01:24:52.128541  764048 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 01:24:52.128617  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 01:24:52.138147  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:24:52.148648  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 01:24:52.157740  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:24:52.166291  764048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 01:24:52.174294  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 01:24:52.182560  764048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 01:24:52.191707  764048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 01:24:52.199478  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:52.279573  764048 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 01:24:52.364794  764048 start.go:475] detecting cgroup driver to use...
	I0223 01:24:52.364849  764048 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:24:52.364907  764048 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 01:24:52.378283  764048 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 01:24:52.378357  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 01:24:52.390249  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:24:52.407123  764048 ssh_runner.go:195] Run: which cri-dockerd
	I0223 01:24:52.410703  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 01:24:52.419413  764048 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 01:24:52.436969  764048 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 01:24:52.538363  764048 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 01:24:52.641674  764048 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 01:24:52.641801  764048 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 01:24:52.672699  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:52.752635  764048 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:24:53.005432  764048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:24:53.028950  764048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:24:50.956327  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:52.956439  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:50.507501  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:52.508735  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:53.053932  764048 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0223 01:24:53.054034  764048 cli_runner.go:164] Run: docker network inspect old-k8s-version-799707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:24:53.069369  764048 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0223 01:24:53.072991  764048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:24:53.082986  764048 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:24:53.083031  764048 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:24:53.101057  764048 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:24:53.101079  764048 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:24:53.101131  764048 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:24:53.109330  764048 ssh_runner.go:195] Run: which lz4
	I0223 01:24:53.112468  764048 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0223 01:24:53.115371  764048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 01:24:53.115398  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0223 01:24:53.900009  764048 docker.go:649] Took 0.787557 seconds to copy over tarball
	I0223 01:24:53.900101  764048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 01:24:55.917765  764048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.017627982s)
	I0223 01:24:55.917798  764048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0223 01:24:55.986783  764048 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:24:55.995174  764048 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0223 01:24:56.012678  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:56.093644  764048 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:24:55.456910  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:57.956346  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:55.008744  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:57.508081  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:58.619686  764048 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.525997554s)
	I0223 01:24:58.619778  764048 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:24:58.638743  764048 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:24:58.638772  764048 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:24:58.638784  764048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
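The mismatch driving this LoadImages pass is visible in the two listings above: the preload tarball ships images tagged under `k8s.gcr.io`, while this minikube build expects them under `registry.k8s.io`, so every required image is reported as "wasn't preloaded". A hedged sketch of the repository rename involved (pure string rewrite for illustration; on a real node the rewrite would be followed by `docker tag old new`, and minikube itself resolves this through its image cache):

```shell
#!/bin/sh
# Sketch: map preloaded k8s.gcr.io tags onto the registry.k8s.io names
# that LoadImages expects. Image list is an abbreviated copy of the
# preload listing in the log.
set -eu

for img in \
    k8s.gcr.io/kube-apiserver:v1.16.0 \
    k8s.gcr.io/etcd:3.3.15-0 \
    k8s.gcr.io/coredns:1.6.2
do
    case "$img" in
        k8s.gcr.io/*)
            # Strip the old registry prefix, re-prefix with the new one.
            new="registry.k8s.io/${img#k8s.gcr.io/}"
            echo "$img -> $new"
            ;;
    esac
done
```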
	I0223 01:24:58.640360  764048 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0223 01:24:58.640468  764048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.640607  764048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.640677  764048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.640855  764048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.640978  764048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.641912  764048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.642118  764048 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.642279  764048 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0223 01:24:58.642467  764048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.642541  764048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.642661  764048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.642840  764048 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.643303  764048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.643387  764048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.643504  764048 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.801512  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0223 01:24:58.810449  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.822313  764048 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0223 01:24:58.822362  764048 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0223 01:24:58.822407  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0223 01:24:58.828135  764048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0223 01:24:58.828187  764048 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.828232  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.832726  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.841318  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0223 01:24:58.843598  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.845024  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.847494  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0223 01:24:58.863715  764048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0223 01:24:58.863770  764048 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.863800  764048 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0223 01:24:58.863817  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.863840  764048 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.863881  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.877108  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.881932  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0223 01:24:58.883024  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0223 01:24:58.887720  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.888992  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.896469  764048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0223 01:24:58.896520  764048 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.896568  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.909663  764048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0223 01:24:58.909718  764048 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.909761  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.909764  764048 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0223 01:24:58.909801  764048 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.909863  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.915957  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0223 01:24:58.930358  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0223 01:24:58.930531  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0223 01:24:58.930584  764048 cache_images.go:92] LoadImages completed in 291.787416ms
	W0223 01:24:58.930662  764048 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0223 01:24:58.930711  764048 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 01:24:59.002793  764048 cni.go:84] Creating CNI manager for ""
	I0223 01:24:59.002825  764048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:24:59.002849  764048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 01:24:59.002873  764048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-799707 NodeName:old-k8s-version-799707 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0223 01:24:59.003021  764048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-799707"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-799707
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.94.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 01:24:59.003101  764048 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-799707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 01:24:59.003150  764048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 01:24:59.011882  764048 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 01:24:59.011955  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 01:24:59.020226  764048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 01:24:59.036352  764048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 01:24:59.052765  764048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 01:24:59.068716  764048 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0223 01:24:59.071794  764048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
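The two Run lines above implement an idempotent hosts-entry rewrite: `grep -v` drops any stale line ending in the host name, the fresh IP mapping is appended, and the result is copied back over `/etc/hosts` in one step. A sketch of the same pattern against a temp file rather than the real `/etc/hosts`:

```shell
#!/bin/sh
# Sketch of the hosts-entry rewrite above, run against a temp file.
# The stale 192.168.94.9 entry is hypothetical, added to show removal.
set -eu

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.94.9\tcontrol-plane.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{
    # Keep every line except an existing entry for the target name.
    grep -v "$(printf '\t')control-plane.minikube.internal$" "$hosts"
    # Append the current mapping.
    printf '192.168.94.2\tcontrol-plane.minikube.internal\n'
} > "$tmp"
cp "$tmp" "$hosts"

cat "$hosts"
```

Writing to a temp file and copying back means the hosts file is never left half-written, and re-running the step converges on the same single entry.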
	I0223 01:24:59.081516  764048 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707 for IP: 192.168.94.2
	I0223 01:24:59.081554  764048 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:24:59.081720  764048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 01:24:59.081765  764048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 01:24:59.081865  764048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.key
	I0223 01:24:59.081931  764048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a
	I0223 01:24:59.081989  764048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key
	I0223 01:24:59.082135  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 01:24:59.082182  764048 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 01:24:59.082205  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 01:24:59.082240  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 01:24:59.082275  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 01:24:59.082304  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 01:24:59.082383  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:24:59.083221  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 01:24:59.105664  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 01:24:59.127530  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 01:24:59.149110  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 01:24:59.171812  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 01:24:59.194479  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 01:24:59.215613  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 01:24:59.236896  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 01:24:59.258380  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 01:24:59.280812  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 01:24:59.303146  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 01:24:59.325675  764048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 01:24:59.342098  764048 ssh_runner.go:195] Run: openssl version
	I0223 01:24:59.347196  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 01:24:59.355998  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.359380  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.359434  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.366000  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
	I0223 01:24:59.373883  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 01:24:59.383550  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.386803  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.386851  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.393159  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 01:24:59.401114  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 01:24:59.410493  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.413720  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.413769  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.419835  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 01:24:59.428503  764048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 01:24:59.431930  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0223 01:24:59.438516  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0223 01:24:59.444802  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0223 01:24:59.451032  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0223 01:24:59.457355  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0223 01:24:59.463364  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0223 01:24:59.469151  764048 kubeadm.go:404] StartCluster: {Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:59.469277  764048 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:24:59.486256  764048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 01:24:59.494482  764048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0223 01:24:59.494549  764048 kubeadm.go:636] restartCluster start
	I0223 01:24:59.494602  764048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 01:24:59.502465  764048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:24:59.503492  764048 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:24:59.504158  764048 kubeconfig.go:146] "old-k8s-version-799707" context is missing from /home/jenkins/minikube-integration/18233-317564/kubeconfig - will repair!
	I0223 01:24:59.505058  764048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/kubeconfig: {Name:mk5dc50cd20b0f8bda8ed11ebbad47615452aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:24:59.506938  764048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 01:24:59.515443  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:24:59.515508  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:24:59.525625  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.016225  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:00.016379  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:00.026296  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.515803  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:00.515913  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:00.526179  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:01.015710  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:01.015775  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:01.026278  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:01.515779  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:01.515870  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:01.526346  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.456513  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:02.956094  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:00.007279  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:02.008550  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:04.507894  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:02.016199  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:02.016270  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:02.026597  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:02.516181  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:02.516275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:02.526556  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:03.016094  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:03.016199  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:03.026612  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:03.516213  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:03.516295  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:03.527347  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:04.015853  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:04.015934  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:04.025845  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:04.516436  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:04.516520  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:04.526628  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.016168  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:05.016238  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:05.026961  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.515470  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:05.515565  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:05.525559  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:06.016173  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:06.016270  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:06.027029  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:06.515495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:06.515612  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:06.525687  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.456412  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:07.456833  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:07.007705  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:09.008286  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:07.015495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:07.015568  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:07.026678  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:07.516253  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:07.516337  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:07.526391  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:08.015899  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:08.015968  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:08.025911  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:08.516098  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:08.516167  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:08.526981  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.016463  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:09.016557  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:09.029165  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.516495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:09.516648  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:09.526971  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.527005  764048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0223 01:25:09.527018  764048 kubeadm.go:1135] stopping kube-system containers ...
	I0223 01:25:09.527081  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:25:09.546502  764048 docker.go:483] Stopping containers: [b2cc87eecf70 a9fc8445a236 12be4814f743 7c810d52cd53]
	I0223 01:25:09.546580  764048 ssh_runner.go:195] Run: docker stop b2cc87eecf70 a9fc8445a236 12be4814f743 7c810d52cd53
	I0223 01:25:09.563682  764048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 01:25:09.576338  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:25:09.584800  764048 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb 23 01:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 23 01:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 23 01:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 23 01:19 /etc/kubernetes/scheduler.conf
	
	I0223 01:25:09.584871  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 01:25:09.593154  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 01:25:09.601622  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 01:25:09.610963  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 01:25:09.618978  764048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:25:09.627191  764048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 01:25:09.627226  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:09.680140  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:10.770745  764048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090560392s)
	I0223 01:25:10.770787  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:10.976122  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:11.038904  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:11.126325  764048 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:25:11.126417  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:11.626797  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:09.956223  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:11.957633  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:11.508301  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:14.007767  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:12.127298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:12.627247  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:13.127338  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:13.627257  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.127311  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.627274  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:15.126534  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:15.627263  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:16.127298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:16.627307  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.456395  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:16.456575  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:18.456739  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:16.507659  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:19.007262  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:17.127218  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:17.627134  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:18.127282  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:18.626855  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:19.127245  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:19.627466  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.127275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.627329  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:21.127325  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:21.627266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.956537  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:22.956701  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:21.008120  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:23.508140  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:22.127189  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:22.627260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:23.126825  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:23.627188  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:24.126739  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:24.627267  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.127304  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.627260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:26.126891  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:26.626687  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.457141  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:27.956309  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:26.006787  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:28.007858  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:27.126498  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:27.626585  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:28.127243  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:28.627268  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:29.127312  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:29.627479  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.127263  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.627259  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:31.127252  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:31.627251  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.456654  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:32.956862  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:30.508479  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:33.008156  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:32.127266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:32.627298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:33.127260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:33.627313  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:34.126749  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:34.626911  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.127303  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.626713  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:36.127324  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:36.626786  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.455801  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:37.456877  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:35.507519  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:37.508156  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:37.126523  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:37.627410  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:38.127109  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:38.627259  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.126994  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.626468  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:40.127319  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:40.627250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:41.127266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:41.626871  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.956295  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:42.456582  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:40.007466  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:42.007689  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:44.507564  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:42.127062  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:42.627285  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:43.127532  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:43.627370  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.127314  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.627262  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:45.127243  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:45.627257  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:46.127476  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:46.627250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.956710  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:46.957028  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:46.507904  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:49.007521  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:47.126569  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:47.627291  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:48.126638  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:48.627296  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.126978  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.627247  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:50.127306  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:50.626690  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:51.126800  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:51.627229  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.455814  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:51.456150  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:53.456683  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:51.007905  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:53.507333  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:52.127250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:52.627255  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:53.127231  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:53.627268  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:54.127330  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:54.627261  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.127327  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.627272  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:56.127307  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:56.626853  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.456774  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:57.956428  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:55.508530  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:58.006786  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:57.127275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:57.627271  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:58.127321  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:58.627294  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.126813  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.627059  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:00.127271  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:00.627113  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:01.127202  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:01.626495  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.956705  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:01.956833  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:00.007279  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:02.007622  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:04.007691  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:02.126951  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:02.627276  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:03.127241  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:03.627284  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.127323  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.626588  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:05.126876  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:05.627245  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:06.127301  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:06.626519  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.456663  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:06.956019  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:06.007948  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:08.507404  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:07.127217  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:07.627286  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:08.126680  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:08.626774  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:09.127378  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:09.627060  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:10.126842  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:10.626792  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:11.126910  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:11.145763  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.145788  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:11.145831  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:11.165136  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.165170  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:11.165223  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:11.182783  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.182815  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:11.182870  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:11.200040  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.200505  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:11.200588  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:11.219336  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.219369  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:11.219481  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:11.236888  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.236916  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:11.236979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:11.255241  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.255276  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:11.255349  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:11.273587  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.273613  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:11.273625  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:11.273645  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:11.301874  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:11.301911  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:11.367953  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:11.367981  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:11.367999  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:11.384915  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:11.384948  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:11.423686  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:11.423719  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:11.443811  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:50 old-k8s-version-799707 kubelet[1655]: E0223 01:25:50.226291    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.446025  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:51 old-k8s-version-799707 kubelet[1655]: E0223 01:25:51.225450    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.448865  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.225297    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.449155  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:11.468475  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.468779  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.472671  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.475027  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:11.483450  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:11.483476  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:11.483545  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:11.483556  764048 out.go:239]   Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:11.483563  764048 out.go:239]   Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.483646  764048 out.go:239]   Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.483662  764048 out.go:239]   Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.483695  764048 out.go:239]   Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:11.483707  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:11.483716  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:08.956136  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:11.456613  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:11.007625  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:13.507430  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:13.956797  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:16.456765  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:16.007065  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:18.007931  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:21.485306  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:21.496387  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:21.514732  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.514762  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:21.514826  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:21.532743  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.532769  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:21.532815  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:21.550131  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.550159  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:21.550217  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:21.567723  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.567752  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:21.567810  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:21.586824  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.586864  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:21.586931  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:21.605250  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.605278  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:21.605328  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:21.623380  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.623417  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:21.623494  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:21.641554  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.641579  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:21.641593  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:21.641610  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:21.670812  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.671137  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.674833  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:21.677109  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.691648  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.695439  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.695961  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.697979  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:21.703403  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:21.703431  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:21.730898  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:21.730932  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:21.792948  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:21.792972  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:21.792988  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:21.810167  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:21.810200  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:21.847886  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:21.847911  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:21.847973  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:21.847988  764048 out.go:239]   Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.847997  764048 out.go:239]   Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.848014  764048 out.go:239]   Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.848024  764048 out.go:239]   Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.848034  764048 out.go:239]   Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:21.848046  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:21.848068  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:18.955888  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:20.956550  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:23.456416  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:20.508236  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:23.007057  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:25.457015  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:27.956587  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:25.007665  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:27.507691  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:28.007441  698728 pod_ready.go:81] duration metric: took 4m0.005882483s waiting for pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace to be "Ready" ...
	E0223 01:26:28.007462  698728 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0223 01:26:28.007470  698728 pod_ready.go:38] duration metric: took 4m1.599715489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:26:28.007495  698728 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:26:28.007565  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:28.025970  698728 logs.go:276] 1 containers: [aa712cd089c3]
	I0223 01:26:28.026043  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:28.043836  698728 logs.go:276] 1 containers: [0a06962fa4e7]
	I0223 01:26:28.043912  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:28.060799  698728 logs.go:276] 1 containers: [7d17fc420a85]
	I0223 01:26:28.060875  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:28.079718  698728 logs.go:276] 1 containers: [5cac64efae58]
	I0223 01:26:28.079798  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:28.097128  698728 logs.go:276] 1 containers: [eb6e8796d89c]
	I0223 01:26:28.097206  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:28.115072  698728 logs.go:276] 1 containers: [bf8b54a25961]
	I0223 01:26:28.115157  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:28.133065  698728 logs.go:276] 0 containers: []
	W0223 01:26:28.133095  698728 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:28.133154  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:28.151878  698728 logs.go:276] 1 containers: [93cfc293740a]
	I0223 01:26:28.151971  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:26:28.169282  698728 logs.go:276] 1 containers: [73aaf28ba2ee]
	I0223 01:26:28.169321  698728 logs.go:123] Gathering logs for kube-scheduler [5cac64efae58] ...
	I0223 01:26:28.169340  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cac64efae58"
	I0223 01:26:28.196325  698728 logs.go:123] Gathering logs for kube-proxy [eb6e8796d89c] ...
	I0223 01:26:28.196360  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6e8796d89c"
	I0223 01:26:28.218355  698728 logs.go:123] Gathering logs for kube-controller-manager [bf8b54a25961] ...
	I0223 01:26:28.218395  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf8b54a25961"
	I0223 01:26:28.260721  698728 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:28.260761  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:28.317909  698728 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:28.317946  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:26:28.410906  698728 logs.go:123] Gathering logs for kube-apiserver [aa712cd089c3] ...
	I0223 01:26:28.410936  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa712cd089c3"
	I0223 01:26:28.442190  698728 logs.go:123] Gathering logs for etcd [0a06962fa4e7] ...
	I0223 01:26:28.442228  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a06962fa4e7"
	I0223 01:26:28.468887  698728 logs.go:123] Gathering logs for coredns [7d17fc420a85] ...
	I0223 01:26:28.468924  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d17fc420a85"
	I0223 01:26:28.489618  698728 logs.go:123] Gathering logs for kubernetes-dashboard [93cfc293740a] ...
	I0223 01:26:28.489647  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93cfc293740a"
	I0223 01:26:28.510600  698728 logs.go:123] Gathering logs for storage-provisioner [73aaf28ba2ee] ...
	I0223 01:26:28.510629  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73aaf28ba2ee"
	I0223 01:26:28.531980  698728 logs.go:123] Gathering logs for container status ...
	I0223 01:26:28.532010  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:28.588173  698728 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:28.588219  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:26:28.677392  698728 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:28.677430  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:31.849099  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:31.860777  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:31.880217  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.880249  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:31.880321  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:31.900070  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.900104  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:31.900177  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:31.924832  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.924871  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:31.924926  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:31.943201  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.943233  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:31.943293  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:31.963632  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.963659  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:31.963718  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:31.981603  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.981631  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:31.981687  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:31.999354  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.999385  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:31.999443  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:29.957264  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:32.457147  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:31.208447  698728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:31.222642  698728 api_server.go:72] duration metric: took 4m7.146676414s to wait for apiserver process to appear ...
	I0223 01:26:31.222673  698728 api_server.go:88] waiting for apiserver healthz status ...
	I0223 01:26:31.222765  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:31.241520  698728 logs.go:276] 1 containers: [aa712cd089c3]
	I0223 01:26:31.241613  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:31.259085  698728 logs.go:276] 1 containers: [0a06962fa4e7]
	I0223 01:26:31.259167  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:31.278635  698728 logs.go:276] 1 containers: [7d17fc420a85]
	I0223 01:26:31.278707  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:31.296938  698728 logs.go:276] 1 containers: [5cac64efae58]
	I0223 01:26:31.297024  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:31.316657  698728 logs.go:276] 1 containers: [eb6e8796d89c]
	I0223 01:26:31.316743  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:31.336028  698728 logs.go:276] 1 containers: [bf8b54a25961]
	I0223 01:26:31.336114  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:31.353603  698728 logs.go:276] 0 containers: []
	W0223 01:26:31.353639  698728 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:31.353698  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:31.371682  698728 logs.go:276] 1 containers: [93cfc293740a]
	I0223 01:26:31.371764  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:26:31.391011  698728 logs.go:276] 1 containers: [73aaf28ba2ee]
	I0223 01:26:31.391050  698728 logs.go:123] Gathering logs for container status ...
	I0223 01:26:31.391065  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:31.446950  698728 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:31.446982  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:31.475094  698728 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:31.475138  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:26:31.569351  698728 logs.go:123] Gathering logs for kube-apiserver [aa712cd089c3] ...
	I0223 01:26:31.569386  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa712cd089c3"
	I0223 01:26:31.600500  698728 logs.go:123] Gathering logs for etcd [0a06962fa4e7] ...
	I0223 01:26:31.600534  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a06962fa4e7"
	I0223 01:26:31.627728  698728 logs.go:123] Gathering logs for kube-proxy [eb6e8796d89c] ...
	I0223 01:26:31.627757  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6e8796d89c"
	I0223 01:26:31.649569  698728 logs.go:123] Gathering logs for kube-controller-manager [bf8b54a25961] ...
	I0223 01:26:31.649604  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf8b54a25961"
	I0223 01:26:31.692582  698728 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:31.692620  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:26:31.791576  698728 logs.go:123] Gathering logs for coredns [7d17fc420a85] ...
	I0223 01:26:31.791616  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d17fc420a85"
	I0223 01:26:31.812623  698728 logs.go:123] Gathering logs for kube-scheduler [5cac64efae58] ...
	I0223 01:26:31.812657  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cac64efae58"
	I0223 01:26:31.837853  698728 logs.go:123] Gathering logs for kubernetes-dashboard [93cfc293740a] ...
	I0223 01:26:31.837882  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93cfc293740a"
	I0223 01:26:31.860409  698728 logs.go:123] Gathering logs for storage-provisioner [73aaf28ba2ee] ...
	I0223 01:26:31.860446  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73aaf28ba2ee"
	I0223 01:26:31.882327  698728 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:31.882360  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:34.458751  698728 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0223 01:26:34.464178  698728 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0223 01:26:34.465641  698728 api_server.go:141] control plane version: v1.28.4
	I0223 01:26:34.465668  698728 api_server.go:131] duration metric: took 3.242982721s to wait for apiserver health ...
	I0223 01:26:34.465677  698728 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 01:26:34.465741  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:34.487279  698728 logs.go:276] 1 containers: [aa712cd089c3]
	I0223 01:26:34.487353  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:34.506454  698728 logs.go:276] 1 containers: [0a06962fa4e7]
	I0223 01:26:34.506534  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:34.526820  698728 logs.go:276] 1 containers: [7d17fc420a85]
	I0223 01:26:34.526900  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:34.548576  698728 logs.go:276] 1 containers: [5cac64efae58]
	I0223 01:26:34.548656  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:34.569299  698728 logs.go:276] 1 containers: [eb6e8796d89c]
	I0223 01:26:34.569387  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:34.589893  698728 logs.go:276] 1 containers: [bf8b54a25961]
	I0223 01:26:34.589967  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:34.611719  698728 logs.go:276] 0 containers: []
	W0223 01:26:34.611745  698728 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:34.611815  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:34.632584  698728 logs.go:276] 1 containers: [93cfc293740a]
	I0223 01:26:34.632673  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:26:34.651078  698728 logs.go:276] 1 containers: [73aaf28ba2ee]
	I0223 01:26:34.651122  698728 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:34.651137  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:26:32.017697  764048 logs.go:276] 0 containers: []
	W0223 01:26:32.017726  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:32.017740  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:32.017757  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:32.045068  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.048789  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.049261  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:32.051257  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.064250  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.070945  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.073222  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.073688  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:32.075053  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:32.075076  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:32.101810  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:32.101851  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:32.162373  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:32.162404  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:32.162421  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:32.179945  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:32.179980  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:32.216971  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:32.217002  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:32.217070  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:32.217085  764048 out.go:239]   Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.217101  764048 out.go:239]   Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.217112  764048 out.go:239]   Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.217130  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.217144  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:32.217159  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:32.217167  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:34.740877  698728 logs.go:123] Gathering logs for etcd [0a06962fa4e7] ...
	I0223 01:26:34.740913  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a06962fa4e7"
	I0223 01:26:34.769168  698728 logs.go:123] Gathering logs for coredns [7d17fc420a85] ...
	I0223 01:26:34.769201  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d17fc420a85"
	I0223 01:26:34.791050  698728 logs.go:123] Gathering logs for kube-proxy [eb6e8796d89c] ...
	I0223 01:26:34.791083  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6e8796d89c"
	I0223 01:26:34.813591  698728 logs.go:123] Gathering logs for kube-controller-manager [bf8b54a25961] ...
	I0223 01:26:34.813625  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf8b54a25961"
	I0223 01:26:34.855060  698728 logs.go:123] Gathering logs for kubernetes-dashboard [93cfc293740a] ...
	I0223 01:26:34.855099  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93cfc293740a"
	I0223 01:26:34.880436  698728 logs.go:123] Gathering logs for storage-provisioner [73aaf28ba2ee] ...
	I0223 01:26:34.880463  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73aaf28ba2ee"
	I0223 01:26:34.900248  698728 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:34.900288  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:34.928856  698728 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:34.928895  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:26:35.024451  698728 logs.go:123] Gathering logs for kube-apiserver [aa712cd089c3] ...
	I0223 01:26:35.024483  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa712cd089c3"
	I0223 01:26:35.053946  698728 logs.go:123] Gathering logs for kube-scheduler [5cac64efae58] ...
	I0223 01:26:35.053982  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cac64efae58"
	I0223 01:26:35.079503  698728 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:35.079536  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:35.134351  698728 logs.go:123] Gathering logs for container status ...
	I0223 01:26:35.134387  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:37.692865  698728 system_pods.go:59] 8 kube-system pods found
	I0223 01:26:37.692901  698728 system_pods.go:61] "coredns-5dd5756b68-p4fwd" [85a617ed-3344-4942-b1a0-765ff78a4925] Running
	I0223 01:26:37.692908  698728 system_pods.go:61] "etcd-embed-certs-039066" [e4638cee-d774-4316-879d-4d18434da56e] Running
	I0223 01:26:37.692913  698728 system_pods.go:61] "kube-apiserver-embed-certs-039066" [92d93d03-19b0-4ad6-854f-db215a4726fe] Running
	I0223 01:26:37.692918  698728 system_pods.go:61] "kube-controller-manager-embed-certs-039066" [2ef18956-2528-4f90-8d42-4d03fc02b3cc] Running
	I0223 01:26:37.692928  698728 system_pods.go:61] "kube-proxy-hmfbz" [f29b3a5e-06f8-484f-9f53-0a827c604e82] Running
	I0223 01:26:37.692933  698728 system_pods.go:61] "kube-scheduler-embed-certs-039066" [a89eac7f-c55a-4db6-8c33-a8eedf923225] Running
	I0223 01:26:37.692942  698728 system_pods.go:61] "metrics-server-57f55c9bc5-s48ls" [81101e57-c24a-4018-9994-f86d859d120b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:26:37.692948  698728 system_pods.go:61] "storage-provisioner" [1f190a7c-156a-46d4-884e-fe094b5d0ff5] Running
	I0223 01:26:37.692962  698728 system_pods.go:74] duration metric: took 3.227277265s to wait for pod list to return data ...
	I0223 01:26:37.692978  698728 default_sa.go:34] waiting for default service account to be created ...
	I0223 01:26:37.695595  698728 default_sa.go:45] found service account: "default"
	I0223 01:26:37.695622  698728 default_sa.go:55] duration metric: took 2.63602ms for default service account to be created ...
	I0223 01:26:37.695633  698728 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 01:26:37.700487  698728 system_pods.go:86] 8 kube-system pods found
	I0223 01:26:37.700514  698728 system_pods.go:89] "coredns-5dd5756b68-p4fwd" [85a617ed-3344-4942-b1a0-765ff78a4925] Running
	I0223 01:26:37.700520  698728 system_pods.go:89] "etcd-embed-certs-039066" [e4638cee-d774-4316-879d-4d18434da56e] Running
	I0223 01:26:37.700524  698728 system_pods.go:89] "kube-apiserver-embed-certs-039066" [92d93d03-19b0-4ad6-854f-db215a4726fe] Running
	I0223 01:26:37.700528  698728 system_pods.go:89] "kube-controller-manager-embed-certs-039066" [2ef18956-2528-4f90-8d42-4d03fc02b3cc] Running
	I0223 01:26:37.700532  698728 system_pods.go:89] "kube-proxy-hmfbz" [f29b3a5e-06f8-484f-9f53-0a827c604e82] Running
	I0223 01:26:37.700536  698728 system_pods.go:89] "kube-scheduler-embed-certs-039066" [a89eac7f-c55a-4db6-8c33-a8eedf923225] Running
	I0223 01:26:37.700542  698728 system_pods.go:89] "metrics-server-57f55c9bc5-s48ls" [81101e57-c24a-4018-9994-f86d859d120b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:26:37.700549  698728 system_pods.go:89] "storage-provisioner" [1f190a7c-156a-46d4-884e-fe094b5d0ff5] Running
	I0223 01:26:37.700557  698728 system_pods.go:126] duration metric: took 4.918ms to wait for k8s-apps to be running ...
	I0223 01:26:37.700564  698728 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 01:26:37.700614  698728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:26:37.712248  698728 system_svc.go:56] duration metric: took 11.67624ms WaitForService to wait for kubelet.
	I0223 01:26:37.712281  698728 kubeadm.go:581] duration metric: took 4m13.636322558s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 01:26:37.712309  698728 node_conditions.go:102] verifying NodePressure condition ...
	I0223 01:26:37.715299  698728 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 01:26:37.715322  698728 node_conditions.go:123] node cpu capacity is 8
	I0223 01:26:37.715337  698728 node_conditions.go:105] duration metric: took 3.021596ms to run NodePressure ...
	I0223 01:26:37.715351  698728 start.go:228] waiting for startup goroutines ...
	I0223 01:26:37.715360  698728 start.go:233] waiting for cluster config update ...
	I0223 01:26:37.715376  698728 start.go:242] writing updated cluster config ...
	I0223 01:26:37.715671  698728 ssh_runner.go:195] Run: rm -f paused
	I0223 01:26:37.764908  698728 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0223 01:26:37.766849  698728 out.go:177] * Done! kubectl is now configured to use "embed-certs-039066" cluster and "default" namespace by default
	I0223 01:26:34.957087  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:37.456374  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:39.456876  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:41.956264  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:42.219253  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:42.229496  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:42.247555  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.247587  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:42.247642  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:42.265205  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.265236  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:42.265284  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:42.284632  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.284661  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:42.284719  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:42.302235  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.302263  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:42.302323  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:42.319683  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.319709  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:42.319767  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:42.338672  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.338696  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:42.338741  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:42.356628  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.356654  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:42.356705  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:42.374290  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.374319  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:42.374334  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:42.374348  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:42.408608  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:42.415731  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:42.418148  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:42.418679  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:42.435726  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:26:42.437740  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:42.437760  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:42.465460  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:42.465489  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:42.524278  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:42.524299  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:42.524312  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:42.540348  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:42.540377  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:42.578403  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:42.578438  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:42.578496  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:42.578507  764048 out.go:239]   Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:42.578531  764048 out.go:239]   Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:42.578546  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:42.578551  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:42.578559  764048 out.go:239]   Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:26:42.578573  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:42.578581  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:44.456381  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:46.456636  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:48.956089  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:50.956816  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:53.456420  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:52.580305  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:52.590732  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:52.607693  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.607725  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:52.607771  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:52.624842  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.624873  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:52.624922  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:52.642827  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.642852  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:52.642899  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:52.660436  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.660462  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:52.660517  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:52.677507  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.677544  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:52.677610  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:52.694555  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.694587  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:52.694642  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:52.712215  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.712248  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:52.712299  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:52.729809  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.729833  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:52.729844  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:52.729857  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:52.748858  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:52.751124  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.752064  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.769290  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:52.772963  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.775019  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.777255  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:52.788895  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:52.788921  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:52.815781  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:52.815820  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:52.875541  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:52.875571  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:52.875587  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:52.897948  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:52.897975  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:52.932891  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:52.932917  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:52.933044  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:52.933066  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.933075  764048 out.go:239]   Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:52.933087  764048 out.go:239]   Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.933099  764048 out.go:239]   Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.933108  764048 out.go:239]   Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:52.933117  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:52.933127  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:55.956390  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:57.951411  747181 pod_ready.go:81] duration metric: took 4m0.00105371s waiting for pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace to be "Ready" ...
	E0223 01:26:57.951437  747181 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0223 01:26:57.951458  747181 pod_ready.go:38] duration metric: took 4m14.536189021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:26:57.951490  747181 kubeadm.go:640] restartCluster took 4m31.50180753s
	W0223 01:26:57.951564  747181 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0223 01:26:57.951596  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0223 01:27:04.486872  747181 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.535251417s)
	I0223 01:27:04.486936  747181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:27:04.497746  747181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:27:04.506004  747181 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:27:04.506090  747181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:27:04.513948  747181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:27:04.513996  747181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:27:04.554467  747181 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0223 01:27:04.554541  747181 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:27:04.602705  747181 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:27:04.602819  747181 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:27:04.602903  747181 kubeadm.go:322] OS: Linux
	I0223 01:27:04.602969  747181 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:27:04.603052  747181 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:27:04.603098  747181 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:27:04.603140  747181 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:27:04.603216  747181 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:27:04.603299  747181 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:27:04.603388  747181 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0223 01:27:04.603465  747181 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0223 01:27:04.603522  747181 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0223 01:27:04.665758  747181 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:27:04.665921  747181 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:27:04.666112  747181 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:27:04.932934  747181 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:27:04.937774  747181 out.go:204]   - Generating certificates and keys ...
	I0223 01:27:04.937861  747181 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:27:04.937928  747181 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:27:04.937991  747181 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:27:04.938057  747181 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:27:04.938125  747181 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:27:04.938196  747181 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:27:04.938277  747181 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:27:04.938382  747181 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:27:04.938450  747181 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:27:04.938515  747181 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:27:04.938550  747181 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:27:04.938595  747181 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:27:05.076940  747181 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:27:05.229217  747181 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:27:05.279726  747181 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:27:05.475432  747181 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:27:05.475893  747181 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:27:05.478193  747181 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:27:02.934253  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:02.945035  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:02.964813  764048 logs.go:276] 0 containers: []
	W0223 01:27:02.964846  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:02.964914  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:02.985554  764048 logs.go:276] 0 containers: []
	W0223 01:27:02.985586  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:02.985643  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:03.003541  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.003573  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:03.003636  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:03.023214  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.023240  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:03.023296  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:03.043054  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.043085  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:03.043148  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:03.061854  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.061886  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:03.061941  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:03.081342  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.081374  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:03.081428  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:03.100486  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.100514  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:03.100528  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:03.100545  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:03.121342  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.125184  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:03.127641  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.130747  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.145918  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.152913  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.153303  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.157613  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:03.166434  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:03.166466  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:03.196885  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:03.196921  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:03.265084  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:03.265110  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:03.265124  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:03.282530  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:03.282564  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:03.321418  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:03.321443  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:03.321514  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:03.321527  764048 out.go:239]   Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.321540  764048 out.go:239]   Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.321554  764048 out.go:239]   Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.321563  764048 out.go:239]   Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.321573  764048 out.go:239]   Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:03.321582  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:03.321593  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:05.480270  747181 out.go:204]   - Booting up control plane ...
	I0223 01:27:05.480397  747181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:27:05.480508  747181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:27:05.480602  747181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:27:05.492771  747181 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:27:05.493384  747181 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:27:05.493454  747181 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 01:27:05.575125  747181 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:27:11.076961  747181 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501969 seconds
	I0223 01:27:11.077130  747181 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 01:27:11.089694  747181 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 01:27:11.608404  747181 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 01:27:11.608599  747181 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-643873 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 01:27:12.117387  747181 kubeadm.go:322] [bootstrap-token] Using token: euudkt.s0v7jwca9pwpsihr
	I0223 01:27:12.119012  747181 out.go:204]   - Configuring RBAC rules ...
	I0223 01:27:12.119158  747181 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 01:27:12.122902  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 01:27:12.130891  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 01:27:12.133452  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 01:27:12.136237  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 01:27:12.139622  747181 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 01:27:12.149299  747181 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 01:27:12.353394  747181 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 01:27:12.576330  747181 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 01:27:12.577609  747181 kubeadm.go:322] 
	I0223 01:27:12.577688  747181 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 01:27:12.577694  747181 kubeadm.go:322] 
	I0223 01:27:12.577755  747181 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 01:27:12.577759  747181 kubeadm.go:322] 
	I0223 01:27:12.577779  747181 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 01:27:12.577826  747181 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 01:27:12.577867  747181 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 01:27:12.577871  747181 kubeadm.go:322] 
	I0223 01:27:12.577913  747181 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 01:27:12.577917  747181 kubeadm.go:322] 
	I0223 01:27:12.577964  747181 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 01:27:12.577969  747181 kubeadm.go:322] 
	I0223 01:27:12.578016  747181 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 01:27:12.578129  747181 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 01:27:12.578222  747181 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 01:27:12.578234  747181 kubeadm.go:322] 
	I0223 01:27:12.578348  747181 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 01:27:12.578455  747181 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 01:27:12.578464  747181 kubeadm.go:322] 
	I0223 01:27:12.578602  747181 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token euudkt.s0v7jwca9pwpsihr \
	I0223 01:27:12.578759  747181 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcbf83b93e1e99c3b9e337c3de6f53b35429b7347eaa8c3731469bde2d109270 \
	I0223 01:27:12.578791  747181 kubeadm.go:322] 	--control-plane 
	I0223 01:27:12.578802  747181 kubeadm.go:322] 
	I0223 01:27:12.578946  747181 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 01:27:12.578967  747181 kubeadm.go:322] 
	I0223 01:27:12.579076  747181 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token euudkt.s0v7jwca9pwpsihr \
	I0223 01:27:12.579208  747181 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcbf83b93e1e99c3b9e337c3de6f53b35429b7347eaa8c3731469bde2d109270 
	I0223 01:27:12.583047  747181 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:27:12.583199  747181 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:27:12.583226  747181 cni.go:84] Creating CNI manager for ""
	I0223 01:27:12.583245  747181 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 01:27:12.585331  747181 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 01:27:12.586731  747181 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 01:27:12.597524  747181 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 01:27:12.616819  747181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 01:27:12.616909  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:12.616935  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=60a1754c54128d325d930960488a4adf9d1d6f25 minikube.k8s.io/name=default-k8s-diff-port-643873 minikube.k8s.io/updated_at=2024_02_23T01_27_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:12.890943  747181 ops.go:34] apiserver oom_adj: -16
	I0223 01:27:12.891096  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:13.391095  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:13.323129  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:13.333740  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:13.351749  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.351777  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:13.351843  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:13.369194  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.369219  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:13.369271  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:13.386603  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.386629  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:13.386698  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:13.404358  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.404389  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:13.404450  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:13.422585  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.422613  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:13.422674  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:13.440278  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.440309  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:13.440358  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:13.459814  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.459846  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:13.459901  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:13.477486  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.477514  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:13.477529  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:13.477546  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:13.502463  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:13.502498  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:13.567760  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:13.567784  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:13.567802  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:13.586261  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:13.586292  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:13.630660  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:13.630698  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:13.653846  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.660373  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.660749  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.664562  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:13.679481  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.680005  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.683875  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.689235  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:13.691661  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:13.691680  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:13.691742  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:13.691759  764048 out.go:239]   Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:13.691770  764048 out.go:239]   Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.691778  764048 out.go:239]   Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.691784  764048 out.go:239]   Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.691792  764048 out.go:239]   Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:13.691801  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:13.691811  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:13.892092  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:14.392134  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:14.892180  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:15.391863  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:15.891413  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:16.391237  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:16.891344  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:17.392063  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:17.891863  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:18.391893  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:18.891539  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:19.391305  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:19.892120  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:20.391562  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:20.891956  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:21.391425  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:21.892176  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:22.391963  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:22.892204  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:23.391144  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:23.891307  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:24.392144  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:24.891266  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:24.972265  747181 kubeadm.go:1088] duration metric: took 12.355415474s to wait for elevateKubeSystemPrivileges.
	I0223 01:27:24.972304  747181 kubeadm.go:406] StartCluster complete in 4m58.548482532s
	I0223 01:27:24.972331  747181 settings.go:142] acquiring lock: {Name:mkdd07176a1016ae9ca7d71258b6199ead689cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:27:24.972428  747181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:27:24.973242  747181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/kubeconfig: {Name:mk5dc50cd20b0f8bda8ed11ebbad47615452aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:27:24.973480  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 01:27:24.973508  747181 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0223 01:27:24.973614  747181 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973633  747181 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973642  747181 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-643873"
	W0223 01:27:24.973650  747181 addons.go:243] addon storage-provisioner should already be in state true
	I0223 01:27:24.973662  747181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643873"
	I0223 01:27:24.973691  747181 config.go:182] Loaded profile config "default-k8s-diff-port-643873": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 01:27:24.973711  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:24.973738  747181 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973752  747181 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-643873"
	W0223 01:27:24.973759  747181 addons.go:243] addon dashboard should already be in state true
	I0223 01:27:24.973813  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:24.973906  747181 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973929  747181 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-643873"
	W0223 01:27:24.973939  747181 addons.go:243] addon metrics-server should already be in state true
	I0223 01:27:24.973976  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:24.974030  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.974241  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.974303  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.974408  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.997976  747181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:27:24.999539  747181 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0223 01:27:24.999494  747181 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 01:27:25.002524  747181 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0223 01:27:25.001096  747181 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0223 01:27:25.001121  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 01:27:25.002156  747181 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-643873"
	W0223 01:27:25.003795  747181 addons.go:243] addon default-storageclass should already be in state true
	I0223 01:27:25.005144  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0223 01:27:25.003834  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:25.003861  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.003882  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0223 01:27:25.005351  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0223 01:27:25.005166  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0223 01:27:25.005412  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.005443  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.005657  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:25.026979  747181 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 01:27:25.027006  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 01:27:25.027074  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.027623  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.027765  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.027983  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.050392  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.093241  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 01:27:25.194001  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 01:27:25.195219  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 01:27:25.195408  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0223 01:27:25.195425  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0223 01:27:25.197566  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0223 01:27:25.197583  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0223 01:27:25.372510  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0223 01:27:25.372546  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0223 01:27:25.382966  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0223 01:27:25.382994  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0223 01:27:25.485281  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0223 01:27:25.485313  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0223 01:27:25.487050  747181 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643873" context rescaled to 1 replicas
	I0223 01:27:25.487149  747181 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 01:27:25.489098  747181 out.go:177] * Verifying Kubernetes components...
	I0223 01:27:23.692473  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:23.703266  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:23.722231  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.722260  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:23.722328  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:23.740592  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.740625  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:23.740691  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:23.759630  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.759655  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:23.759701  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:23.777152  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.777182  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:23.777252  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:23.794715  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.794746  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:23.794812  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:23.812469  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.812494  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:23.812554  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:23.830330  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.830357  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:23.830409  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:23.847767  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.847791  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:23.847802  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:23.847813  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:23.864330  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:23.864362  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:23.900552  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:23.900582  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:23.935656  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:23.936227  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:23.940498  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:23.946760  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:23.957938  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:23.965312  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:23.967639  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:27:23.968659  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:23.968676  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:23.995207  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:23.995243  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:24.054134  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:24.054163  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:24.054186  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:24.054242  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:24.054257  764048 out.go:239]   Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:24.054269  764048 out.go:239]   Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:24.054280  764048 out.go:239]   Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:24.054294  764048 out.go:239]   Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:24.054309  764048 out.go:239]   Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:27:24.054321  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:24.054329  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:25.490732  747181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:27:25.572365  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0223 01:27:25.572399  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0223 01:27:25.771603  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0223 01:27:25.771636  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0223 01:27:25.777396  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0223 01:27:25.795849  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0223 01:27:25.795880  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0223 01:27:25.882916  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0223 01:27:25.882942  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0223 01:27:25.903798  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0223 01:27:25.903832  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0223 01:27:25.986660  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0223 01:27:25.986692  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0223 01:27:26.007706  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0223 01:27:26.007738  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0223 01:27:26.088546  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0223 01:27:27.291666  747181 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.198371947s)
	I0223 01:27:27.291727  747181 start.go:929] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0223 01:27:27.672404  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478357421s)
	I0223 01:27:27.672499  747181 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.181704611s)
	I0223 01:27:27.672663  747181 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643873" to be "Ready" ...
	I0223 01:27:27.672491  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.47723657s)
	I0223 01:27:27.677947  747181 node_ready.go:49] node "default-k8s-diff-port-643873" has status "Ready":"True"
	I0223 01:27:27.677980  747181 node_ready.go:38] duration metric: took 5.279557ms waiting for node "default-k8s-diff-port-643873" to be "Ready" ...
	I0223 01:27:27.678035  747181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:27:27.687553  747181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-58f8r" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.783675  747181 pod_ready.go:92] pod "coredns-5dd5756b68-58f8r" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.783777  747181 pod_ready.go:81] duration metric: took 96.184241ms waiting for pod "coredns-5dd5756b68-58f8r" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.783802  747181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.794877  747181 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.794906  747181 pod_ready.go:81] duration metric: took 11.086164ms waiting for pod "etcd-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.794920  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.872561  747181 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.872594  747181 pod_ready.go:81] duration metric: took 77.664042ms waiting for pod "kube-apiserver-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.872612  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.879684  747181 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.879712  747181 pod_ready.go:81] duration metric: took 7.090402ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.879725  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2rpb8" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.910434  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132982172s)
	I0223 01:27:27.910484  747181 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-643873"
	I0223 01:27:28.077337  747181 pod_ready.go:92] pod "kube-proxy-2rpb8" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:28.077366  747181 pod_ready.go:81] duration metric: took 197.632572ms waiting for pod "kube-proxy-2rpb8" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.077383  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.478162  747181 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:28.478191  747181 pod_ready.go:81] duration metric: took 400.797707ms waiting for pod "kube-scheduler-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.478218  747181 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.621653  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.533051258s)
	I0223 01:27:28.623454  747181 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-643873 addons enable metrics-server
	
	I0223 01:27:28.624862  747181 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0223 01:27:28.626357  747181 addons.go:505] enable addons completed in 3.652850638s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0223 01:27:30.485483  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:32.986138  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:34.056179  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:34.068644  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:34.091576  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.091606  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:34.091662  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:34.112999  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.113029  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:34.113083  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:34.135911  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.135948  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:34.136009  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:34.155552  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.155584  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:34.155639  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:34.172644  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.172674  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:34.172731  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:34.193231  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.193261  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:34.193318  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:34.213564  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.213587  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:34.213632  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:34.234247  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.234274  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:34.234288  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:34.234304  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:34.254068  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:34.254102  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:34.294146  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:34.294180  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:34.318533  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.329296  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:34.339920  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:34.343536  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:34.350850  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.358682  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:34.367366  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:34.367396  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:34.403850  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:34.403915  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:34.479101  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:34.479131  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:34.479144  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:34.479211  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:34.479227  764048 out.go:239]   Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:34.479247  764048 out.go:239]   Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:34.479266  764048 out.go:239]   Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:34.479275  764048 out.go:239]   Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.479284  764048 out.go:239]   Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:34.479292  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:34.479304  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:35.485479  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:37.983973  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:39.984078  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:41.984604  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:44.481194  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:44.492741  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:44.510893  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.510919  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:44.510979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:44.528074  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.528099  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:44.528147  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:44.545615  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.545650  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:44.545711  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:44.562131  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.562157  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:44.562216  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:44.579943  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.579968  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:44.580032  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:44.597379  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.597405  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:44.597469  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:44.614583  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.614645  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:44.614736  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:44.632117  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.632153  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:44.632167  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:44.632182  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:44.649949  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.652196  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.657147  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.664845  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:44.673447  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.677423  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.682830  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.686738  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:44.690877  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:44.690909  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:44.719106  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:44.719147  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:44.778079  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:44.778107  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:44.778126  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:44.794656  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:44.794686  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:44.831247  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:44.831275  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:44.831339  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:44.831351  764048 out.go:239]   Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:44.831360  764048 out.go:239]   Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.831371  764048 out.go:239]   Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.831379  764048 out.go:239]   Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.831390  764048 out.go:239]   Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:44.831397  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:44.831405  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
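The kubelet problems flagged above all share one failure mode: Docker returns an image record whose Id/size is unset, so the kubelet aborts `StartContainer` with `ImageInspectError` for every static control-plane pod. As a minimal sketch (not part of minikube's code) of how one might summarize such a log excerpt, the helper below pulls the affected image references out of lines shaped like the ones above:

```python
import re

# Matches the image reference inside: Failed to inspect image \"<ref>\":
# The backslash before the quote is optional, since journalctl output may
# or may not carry the escaped form seen in this log.
PATTERN = re.compile(r'Failed to inspect image \\?"(.+?)\\?"')

def failing_images(log_lines):
    """Return unique image refs that hit ImageInspectError, in order seen."""
    seen = []
    for line in log_lines:
        m = PATTERN.search(line)
        if m and m.group(1) not in seen:
            seen.append(m.group(1))
    return seen
```

Run over the "Found kubelet problem" lines above, this would reduce the noise to the four images the kubelet cannot start: etcd, kube-apiserver, kube-scheduler, and kube-controller-manager.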
	I0223 01:27:44.484397  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:46.983766  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:48.984339  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:50.984548  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:53.485101  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:54.832552  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:54.843379  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:54.861974  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.862004  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:54.862082  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:54.880013  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.880054  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:54.880110  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:54.896746  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.896776  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:54.896846  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:54.913796  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.913826  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:54.913899  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:54.931897  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.931928  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:54.931988  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:54.949435  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.949468  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:54.949534  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:54.966362  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.966386  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:54.966431  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:54.983954  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.983982  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:54.983995  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:54.984011  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:54.999879  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:54.999907  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:55.037126  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:55.037156  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:55.059470  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.063298  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.068690  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:55.072516  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.081122  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.090028  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.092291  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.092793  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:55.095603  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:55.095626  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:55.123414  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:55.123451  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:55.179936  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:55.179960  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:55.179971  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:55.180020  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:55.180032  764048 out.go:239]   Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.180039  764048 out.go:239]   Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.180072  764048 out.go:239]   Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.180086  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.180105  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:55.180114  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:55.180124  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:55.984245  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:57.984659  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:00.483779  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:02.484843  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:05.181993  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:05.192424  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:05.210121  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.210156  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:05.210200  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:05.228650  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.228675  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:05.228723  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:05.245884  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.245913  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:05.245979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:05.262993  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.263028  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:05.263088  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:05.280340  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.280371  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:05.280435  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:05.297947  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.297970  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:05.298018  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:05.315334  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.315366  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:05.315425  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:05.332647  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.332671  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:05.332681  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:05.332694  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:05.348614  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:05.348642  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:05.384048  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:05.384079  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:05.402702  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.411066  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.419595  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.421739  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.422302  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:05.430697  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.440486  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.442750  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:05.443073  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:05.443095  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:05.468968  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:05.469004  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:05.527294  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:05.527344  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:05.527358  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:05.527423  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:05.527440  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.527456  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:05.527471  764048 out.go:239]   Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.527486  764048 out.go:239]   Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.527501  764048 out.go:239]   Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:05.527515  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:05.527523  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:04.983939  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:06.984331  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:08.984401  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:11.484559  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:15.528852  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:15.540245  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:15.557540  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.557566  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:15.557615  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:15.573753  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.573777  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:15.573835  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:15.590472  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.590500  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:15.590554  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:15.608537  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.608568  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:15.608647  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:15.624845  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.624875  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:15.624930  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:15.641988  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.642016  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:15.642095  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:15.660022  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.660052  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:15.660102  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:15.677241  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.677266  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:15.677277  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:15.677291  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:15.703651  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:15.703682  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:15.762510  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:15.762531  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:15.762544  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:15.778772  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:15.778803  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:15.815612  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:15.815642  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:15.834932  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.835453  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.844214  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:15.854157  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:15.856473  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.861781  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.870466  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:15.874488  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:15.874509  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:15.874577  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:15.874592  764048 out.go:239]   Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:15.874601  764048 out.go:239]   Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:15.874613  764048 out.go:239]   Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.874627  764048 out.go:239]   Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.874638  764048 out.go:239]   Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:15.874649  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:15.874660  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:13.986081  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:16.484482  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:18.988090  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:21.484581  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:25.876148  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:25.886833  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:25.903865  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.903895  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:25.903941  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:25.921203  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.921229  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:25.921272  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:25.938748  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.938776  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:25.938825  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:25.956769  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.956792  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:25.956845  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:25.973495  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.973518  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:25.973561  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:25.992272  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.992298  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:25.992349  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:26.010007  764048 logs.go:276] 0 containers: []
	W0223 01:28:26.010030  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:26.010111  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:26.027042  764048 logs.go:276] 0 containers: []
	W0223 01:28:26.027073  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:26.027087  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:26.027103  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:26.083781  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:26.083807  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:26.083824  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:26.099963  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:26.099992  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:26.137069  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:26.137100  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:26.157617  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.159983  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.165342  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:26.174225  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:26.179523  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.185204  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.192290  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:28:26.197251  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:26.197274  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:26.222726  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:26.222752  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:26.222806  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:26.222818  764048 out.go:239]   Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:26.222824  764048 out.go:239]   Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:26.222834  764048 out.go:239]   Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.222842  764048 out.go:239]   Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.222853  764048 out.go:239]   Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:28:26.222864  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:26.222870  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:23.984339  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:25.984436  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:28.484517  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:30.984871  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:33.483929  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:36.224294  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:36.234593  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:36.252123  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.252147  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:36.252201  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:36.270152  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.270181  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:36.270234  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:36.286776  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.286803  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:36.286857  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:36.303407  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.303443  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:36.303500  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:36.320332  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.320360  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:36.320402  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:36.337290  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.337318  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:36.337367  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:36.356032  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.356056  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:36.356109  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:36.372883  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.372909  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:36.372919  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:36.372931  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:36.388787  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:36.388825  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:36.424874  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:36.424910  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:36.445848  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.451297  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.456927  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:36.463893  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:36.471013  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.477862  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.485415  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:28:36.488865  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:36.488888  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:36.516057  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:36.516089  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:36.573623  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:36.573645  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:36.573658  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:36.573725  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:36.573738  764048 out.go:239]   Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:36.573747  764048 out.go:239]   Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:36.573757  764048 out.go:239]   Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.573771  764048 out.go:239]   Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.573783  764048 out.go:239]   Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:28:36.573794  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:36.573807  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:35.484648  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:37.984373  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:39.984656  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:42.484920  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:46.575225  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:46.585661  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:46.602730  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.602756  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:46.602806  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:46.620030  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.620061  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:46.620109  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:46.637449  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.637478  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:46.637529  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:46.655302  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.655353  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:46.655405  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:46.672835  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.672859  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:46.672906  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:46.689042  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.689074  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:46.689128  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:46.705921  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.705949  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:46.706010  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:46.722399  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.722429  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:46.722442  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:46.722459  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:46.778773  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:46.778800  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:46.778815  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:46.794759  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:46.794791  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:46.831175  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:46.831207  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:46.858565  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.865386  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:46.871324  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:46.878984  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:46.881096  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.893561  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:46.894713  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:46.894737  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:46.920290  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:46.920317  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:46.920373  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:46.920384  764048 out.go:239]   Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:46.920391  764048 out.go:239]   Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:46.920401  764048 out.go:239]   Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:46.920409  764048 out.go:239]   Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.920418  764048 out.go:239]   Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:46.920424  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:46.920432  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:44.984374  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:46.984544  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:49.484549  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:51.984309  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:56.921234  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:56.932263  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:56.950133  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.950165  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:56.950211  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:56.967513  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.967544  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:56.967610  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:56.985114  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.985135  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:56.985190  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:57.001619  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.001645  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:57.001690  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:54.484395  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:56.484684  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:57.019356  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.019381  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:57.019428  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:57.036683  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.036711  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:57.036776  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:57.053460  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.053489  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:57.053536  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:57.070212  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.070240  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:57.070253  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:57.070270  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:57.096008  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:57.096044  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:57.153794  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:57.153817  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:57.153833  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:57.170295  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:57.170328  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:57.205650  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:57.205677  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:57.227302  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.234866  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.236884  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:57.248965  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:57.254557  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.254822  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.263128  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:57.267869  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:57.267897  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:57.267963  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:57.267977  764048 out.go:239]   Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:57.267989  764048 out.go:239]   Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:57.267998  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.268008  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.268018  764048 out.go:239]   Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:57.268026  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:57.268031  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:58.984324  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:00.984714  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:03.484718  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:05.984464  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:08.484534  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:07.269999  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:29:07.280827  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:29:07.297977  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.298005  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:29:07.298075  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:29:07.315186  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.315222  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:29:07.315276  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:29:07.332204  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.332234  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:29:07.332284  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:29:07.349378  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.349407  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:29:07.349461  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:29:07.366248  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.366275  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:29:07.366340  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:29:07.384205  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.384229  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:29:07.384287  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:29:07.402600  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.402625  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:29:07.402678  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:29:07.420951  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.420984  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:29:07.421000  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:29:07.421022  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:29:07.446613  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:29:07.446648  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:29:07.505820  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:29:07.505841  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:29:07.505859  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:29:07.521736  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:29:07.521819  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:29:07.559319  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:29:07.559353  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:29:07.583248  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.588793  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.589050  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:29:07.597224  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:29:07.605348  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:59 old-k8s-version-799707 kubelet[1655]: E0223 01:28:59.224549    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.610814  764048 logs.go:138] Found kubelet problem: Feb 23 01:29:02 old-k8s-version-799707 kubelet[1655]: E0223 01:29:02.224722    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.612796  764048 logs.go:138] Found kubelet problem: Feb 23 01:29:03 old-k8s-version-799707 kubelet[1655]: E0223 01:29:03.225000    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:29:07.619406  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:29:07.619427  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:29:07.619490  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:29:07.619501  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:29:07.619510  764048 out.go:239]   Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:29:07.619519  764048 out.go:239]   Feb 23 01:28:59 old-k8s-version-799707 kubelet[1655]: E0223 01:28:59.224549    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.619526  764048 out.go:239]   Feb 23 01:29:02 old-k8s-version-799707 kubelet[1655]: E0223 01:29:02.224722    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.619535  764048 out.go:239]   Feb 23 01:29:03 old-k8s-version-799707 kubelet[1655]: E0223 01:29:03.225000    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:29:07.619540  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:29:07.619547  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:29:10.485157  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:12.983098  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:14.985538  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:16.986317  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:17.620865  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:29:17.631202  764048 kubeadm.go:640] restartCluster took 4m18.136634178s
	W0223 01:29:17.631285  764048 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0223 01:29:17.631316  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:29:18.369723  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:29:18.380597  764048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:29:18.389648  764048 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:29:18.389701  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:29:18.397500  764048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:29:18.397542  764048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:29:18.444581  764048 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:29:18.444639  764048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:29:18.612172  764048 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:29:18.612306  764048 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:29:18.612397  764048 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:29:18.612453  764048 kubeadm.go:322] OS: Linux
	I0223 01:29:18.612523  764048 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:29:18.612593  764048 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:29:18.612684  764048 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:29:18.612758  764048 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:29:18.612840  764048 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:29:18.612911  764048 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:29:18.685576  764048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:29:18.685704  764048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:29:18.685805  764048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:29:18.862281  764048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:29:18.863574  764048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:29:18.870417  764048 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:29:18.940701  764048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:29:18.943092  764048 out.go:204]   - Generating certificates and keys ...
	I0223 01:29:18.943199  764048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:29:18.943290  764048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:29:18.943424  764048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:29:18.943551  764048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:29:18.943651  764048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:29:18.943746  764048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:29:18.943837  764048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:29:18.943942  764048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:29:18.944060  764048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:29:18.944168  764048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:29:18.944239  764048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:29:18.944323  764048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:29:19.128104  764048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:29:19.237894  764048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:29:19.392875  764048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:29:19.789723  764048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:29:19.790432  764048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:29:19.792764  764048 out.go:204]   - Booting up control plane ...
	I0223 01:29:19.792883  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:29:19.795900  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:29:19.796833  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:29:19.797487  764048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:29:19.801650  764048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:29:19.485472  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:21.984198  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:24.484136  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:26.983917  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:28.984372  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:30.984461  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:33.484393  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:35.984903  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:38.484472  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:40.984280  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:42.984351  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:44.984392  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:47.483908  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:49.484381  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:51.484601  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:53.983584  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:55.983823  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:57.984149  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:59.801941  764048 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:30:00.484843  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:02.985193  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:05.486226  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:07.984524  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:10.484401  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:12.984552  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:15.484478  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:17.984565  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:19.984601  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:22.484162  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:24.984814  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:27.484004  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:29.484683  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:31.484863  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:33.983622  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:35.984247  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:38.484031  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:40.484654  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:42.984693  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:45.484942  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:47.984073  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:49.984624  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:51.984683  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:54.484210  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:56.484709  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:58.484771  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:00.984626  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:03.484286  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:05.484917  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:07.984315  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:09.984491  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:12.484403  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:14.983684  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:16.984247  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:18.984493  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:21.484329  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:23.983852  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:25.984060  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:28.484334  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:28.484361  747181 pod_ready.go:81] duration metric: took 4m0.006134852s waiting for pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace to be "Ready" ...
	E0223 01:31:28.484372  747181 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0223 01:31:28.484380  747181 pod_ready.go:38] duration metric: took 4m0.806294848s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:31:28.484405  747181 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:31:28.484502  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:31:28.504509  747181 logs.go:276] 1 containers: [e3f269ae1d93]
	I0223 01:31:28.504590  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:31:28.522143  747181 logs.go:276] 1 containers: [f0e457a2e9eb]
	I0223 01:31:28.522211  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:31:28.540493  747181 logs.go:276] 1 containers: [aefa45a56f54]
	I0223 01:31:28.540571  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:31:28.558804  747181 logs.go:276] 1 containers: [af049e910b16]
	I0223 01:31:28.558898  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:31:28.577087  747181 logs.go:276] 1 containers: [6eaafcfb77d4]
	I0223 01:31:28.577165  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:31:28.594722  747181 logs.go:276] 1 containers: [c980112f54ec]
	I0223 01:31:28.594810  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:31:28.612317  747181 logs.go:276] 0 containers: []
	W0223 01:31:28.612349  747181 logs.go:278] No container was found matching "kindnet"
	I0223 01:31:28.612410  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:31:28.630536  747181 logs.go:276] 1 containers: [d18a90a3d2d1]
	I0223 01:31:28.630608  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:31:28.648473  747181 logs.go:276] 1 containers: [87a0e583f265]
	I0223 01:31:28.648517  747181 logs.go:123] Gathering logs for kube-scheduler [af049e910b16] ...
	I0223 01:31:28.648531  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af049e910b16"
	I0223 01:31:28.674785  747181 logs.go:123] Gathering logs for kube-controller-manager [c980112f54ec] ...
	I0223 01:31:28.674816  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c980112f54ec"
	I0223 01:31:28.713783  747181 logs.go:123] Gathering logs for container status ...
	I0223 01:31:28.713815  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:31:28.767625  747181 logs.go:123] Gathering logs for kubelet ...
	I0223 01:31:28.767659  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:31:28.855755  747181 logs.go:123] Gathering logs for dmesg ...
	I0223 01:31:28.855794  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:31:28.881896  747181 logs.go:123] Gathering logs for kube-apiserver [e3f269ae1d93] ...
	I0223 01:31:28.881929  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3f269ae1d93"
	I0223 01:31:28.910634  747181 logs.go:123] Gathering logs for etcd [f0e457a2e9eb] ...
	I0223 01:31:28.910668  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e457a2e9eb"
	I0223 01:31:28.934870  747181 logs.go:123] Gathering logs for storage-provisioner [87a0e583f265] ...
	I0223 01:31:28.934903  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87a0e583f265"
	I0223 01:31:28.955863  747181 logs.go:123] Gathering logs for Docker ...
	I0223 01:31:28.955890  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:31:29.015157  747181 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:31:29.015197  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:31:29.105733  747181 logs.go:123] Gathering logs for coredns [aefa45a56f54] ...
	I0223 01:31:29.105760  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aefa45a56f54"
	I0223 01:31:29.125505  747181 logs.go:123] Gathering logs for kube-proxy [6eaafcfb77d4] ...
	I0223 01:31:29.125532  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6eaafcfb77d4"
	I0223 01:31:29.146014  747181 logs.go:123] Gathering logs for kubernetes-dashboard [d18a90a3d2d1] ...
	I0223 01:31:29.146043  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18a90a3d2d1"
	I0223 01:31:31.667826  747181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:31:31.680599  747181 api_server.go:72] duration metric: took 4m6.193372853s to wait for apiserver process to appear ...
	I0223 01:31:31.680639  747181 api_server.go:88] waiting for apiserver healthz status ...
	I0223 01:31:31.680711  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:31:31.698617  747181 logs.go:276] 1 containers: [e3f269ae1d93]
	I0223 01:31:31.698755  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:31:31.716225  747181 logs.go:276] 1 containers: [f0e457a2e9eb]
	I0223 01:31:31.716303  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:31:31.734194  747181 logs.go:276] 1 containers: [aefa45a56f54]
	I0223 01:31:31.734276  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:31:31.751527  747181 logs.go:276] 1 containers: [af049e910b16]
	I0223 01:31:31.751610  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:31:31.769553  747181 logs.go:276] 1 containers: [6eaafcfb77d4]
	I0223 01:31:31.769623  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:31:31.787456  747181 logs.go:276] 1 containers: [c980112f54ec]
	I0223 01:31:31.787559  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:31:31.805210  747181 logs.go:276] 0 containers: []
	W0223 01:31:31.805236  747181 logs.go:278] No container was found matching "kindnet"
	I0223 01:31:31.805285  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:31:31.823185  747181 logs.go:276] 1 containers: [d18a90a3d2d1]
	I0223 01:31:31.823269  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:31:31.841289  747181 logs.go:276] 1 containers: [87a0e583f265]
	I0223 01:31:31.841331  747181 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:31:31.841349  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:31:31.933112  747181 logs.go:123] Gathering logs for kube-apiserver [e3f269ae1d93] ...
	I0223 01:31:31.933146  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3f269ae1d93"
	I0223 01:31:31.964592  747181 logs.go:123] Gathering logs for kube-proxy [6eaafcfb77d4] ...
	I0223 01:31:31.964630  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6eaafcfb77d4"
	I0223 01:31:31.986244  747181 logs.go:123] Gathering logs for kube-controller-manager [c980112f54ec] ...
	I0223 01:31:31.986279  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c980112f54ec"
	I0223 01:31:32.026243  747181 logs.go:123] Gathering logs for kubernetes-dashboard [d18a90a3d2d1] ...
	I0223 01:31:32.026283  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18a90a3d2d1"
	I0223 01:31:32.047323  747181 logs.go:123] Gathering logs for Docker ...
	I0223 01:31:32.047357  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:31:32.102300  747181 logs.go:123] Gathering logs for container status ...
	I0223 01:31:32.102343  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:31:32.157201  747181 logs.go:123] Gathering logs for kubelet ...
	I0223 01:31:32.157237  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:31:32.248025  747181 logs.go:123] Gathering logs for etcd [f0e457a2e9eb] ...
	I0223 01:31:32.248082  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e457a2e9eb"
	I0223 01:31:32.274002  747181 logs.go:123] Gathering logs for coredns [aefa45a56f54] ...
	I0223 01:31:32.274033  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aefa45a56f54"
	I0223 01:31:32.294363  747181 logs.go:123] Gathering logs for kube-scheduler [af049e910b16] ...
	I0223 01:31:32.294396  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af049e910b16"
	I0223 01:31:32.319982  747181 logs.go:123] Gathering logs for storage-provisioner [87a0e583f265] ...
	I0223 01:31:32.320015  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87a0e583f265"
	I0223 01:31:32.340494  747181 logs.go:123] Gathering logs for dmesg ...
	I0223 01:31:32.340523  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
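	The log-gathering pass above follows a fixed recipe: for each control-plane component, find its container ID with a `docker ps` name filter, then tail the last 400 lines of its logs. A minimal sketch of that loop (component list and `k8s_` name prefix taken from the log lines above; this is an illustration, not minikube's actual implementation):

```shell
# Sketch of minikube's per-component log gathering (assumed structure).
# Containers started by dockershim carry a "k8s_<component>" name prefix,
# which is what the --filter=name= expressions in the log match on.
components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager"

for c in $components; do
  # First matching container ID for this component, if any
  id=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | head -n1)
  if [ -n "$id" ]; then
    docker logs --tail 400 "$id"
  else
    # Mirrors the 'No container was found matching ...' warning in the log
    echo "No container was found matching \"${c}\"" >&2
  fi
done
```

	The kindnet lookup above returns 0 containers because this profile uses the docker driver's default bridge networking rather than the kindnet CNI, so the warning is expected.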
	I0223 01:31:34.869040  747181 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0223 01:31:34.873062  747181 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0223 01:31:34.874138  747181 api_server.go:141] control plane version: v1.28.4
	I0223 01:31:34.874163  747181 api_server.go:131] duration metric: took 3.193515753s to wait for apiserver health ...
	I0223 01:31:34.874174  747181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 01:31:34.874242  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:31:34.892077  747181 logs.go:276] 1 containers: [e3f269ae1d93]
	I0223 01:31:34.892136  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:31:34.911804  747181 logs.go:276] 1 containers: [f0e457a2e9eb]
	I0223 01:31:34.911916  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:31:34.929552  747181 logs.go:276] 1 containers: [aefa45a56f54]
	I0223 01:31:34.929639  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:31:34.948285  747181 logs.go:276] 1 containers: [af049e910b16]
	I0223 01:31:34.948397  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:31:34.965670  747181 logs.go:276] 1 containers: [6eaafcfb77d4]
	I0223 01:31:34.965764  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:31:34.983712  747181 logs.go:276] 1 containers: [c980112f54ec]
	I0223 01:31:34.983786  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:31:35.000413  747181 logs.go:276] 0 containers: []
	W0223 01:31:35.000441  747181 logs.go:278] No container was found matching "kindnet"
	I0223 01:31:35.000497  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:31:35.018154  747181 logs.go:276] 1 containers: [87a0e583f265]
	I0223 01:31:35.018220  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:31:35.035590  747181 logs.go:276] 1 containers: [d18a90a3d2d1]
	I0223 01:31:35.035631  747181 logs.go:123] Gathering logs for dmesg ...
	I0223 01:31:35.035647  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:31:35.060772  747181 logs.go:123] Gathering logs for coredns [aefa45a56f54] ...
	I0223 01:31:35.060804  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aefa45a56f54"
	I0223 01:31:35.080420  747181 logs.go:123] Gathering logs for kube-scheduler [af049e910b16] ...
	I0223 01:31:35.080455  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af049e910b16"
	I0223 01:31:35.106810  747181 logs.go:123] Gathering logs for kube-proxy [6eaafcfb77d4] ...
	I0223 01:31:35.106841  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6eaafcfb77d4"
	I0223 01:31:35.126949  747181 logs.go:123] Gathering logs for kube-controller-manager [c980112f54ec] ...
	I0223 01:31:35.126977  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c980112f54ec"
	I0223 01:31:35.168862  747181 logs.go:123] Gathering logs for Docker ...
	I0223 01:31:35.168899  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:31:35.224570  747181 logs.go:123] Gathering logs for kubelet ...
	I0223 01:31:35.224618  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:31:35.317591  747181 logs.go:123] Gathering logs for kube-apiserver [e3f269ae1d93] ...
	I0223 01:31:35.317642  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3f269ae1d93"
	I0223 01:31:35.348873  747181 logs.go:123] Gathering logs for etcd [f0e457a2e9eb] ...
	I0223 01:31:35.348921  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e457a2e9eb"
	I0223 01:31:35.373283  747181 logs.go:123] Gathering logs for storage-provisioner [87a0e583f265] ...
	I0223 01:31:35.373312  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87a0e583f265"
	I0223 01:31:35.392844  747181 logs.go:123] Gathering logs for kubernetes-dashboard [d18a90a3d2d1] ...
	I0223 01:31:35.392882  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18a90a3d2d1"
	I0223 01:31:35.414101  747181 logs.go:123] Gathering logs for container status ...
	I0223 01:31:35.414134  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:31:35.467188  747181 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:31:35.467221  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:31:38.067133  747181 system_pods.go:59] 8 kube-system pods found
	I0223 01:31:38.067159  747181 system_pods.go:61] "coredns-5dd5756b68-58f8r" [4654ded8-e843-40c2-a043-51af70a0c073] Running
	I0223 01:31:38.067166  747181 system_pods.go:61] "etcd-default-k8s-diff-port-643873" [03e8b1b0-a66a-4001-9ba8-50a81823592e] Running
	I0223 01:31:38.067169  747181 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643873" [c7c0bdbb-d372-4753-92cc-f24fe3f7dcb7] Running
	I0223 01:31:38.067173  747181 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643873" [50e983b6-a2cd-4fb4-a23a-2ebb91a37b73] Running
	I0223 01:31:38.067176  747181 system_pods.go:61] "kube-proxy-2rpb8" [dcc39424-df06-4bf0-b617-7f1e34633991] Running
	I0223 01:31:38.067180  747181 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643873" [5b74d719-d554-4cba-bf75-72c5fd1b6b9f] Running
	I0223 01:31:38.067186  747181 system_pods.go:61] "metrics-server-57f55c9bc5-54cdb" [8e42f000-1c93-462c-966c-ce0f162cac9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:31:38.067191  747181 system_pods.go:61] "storage-provisioner" [6d6131ed-db27-4bdd-8645-38ef42ddb1a8] Running
	I0223 01:31:38.067199  747181 system_pods.go:74] duration metric: took 3.193019209s to wait for pod list to return data ...
	I0223 01:31:38.067209  747181 default_sa.go:34] waiting for default service account to be created ...
	I0223 01:31:38.069384  747181 default_sa.go:45] found service account: "default"
	I0223 01:31:38.069405  747181 default_sa.go:55] duration metric: took 2.18944ms for default service account to be created ...
	I0223 01:31:38.069413  747181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 01:31:38.073877  747181 system_pods.go:86] 8 kube-system pods found
	I0223 01:31:38.073898  747181 system_pods.go:89] "coredns-5dd5756b68-58f8r" [4654ded8-e843-40c2-a043-51af70a0c073] Running
	I0223 01:31:38.073904  747181 system_pods.go:89] "etcd-default-k8s-diff-port-643873" [03e8b1b0-a66a-4001-9ba8-50a81823592e] Running
	I0223 01:31:38.073908  747181 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643873" [c7c0bdbb-d372-4753-92cc-f24fe3f7dcb7] Running
	I0223 01:31:38.073915  747181 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643873" [50e983b6-a2cd-4fb4-a23a-2ebb91a37b73] Running
	I0223 01:31:38.073919  747181 system_pods.go:89] "kube-proxy-2rpb8" [dcc39424-df06-4bf0-b617-7f1e34633991] Running
	I0223 01:31:38.073923  747181 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643873" [5b74d719-d554-4cba-bf75-72c5fd1b6b9f] Running
	I0223 01:31:38.073932  747181 system_pods.go:89] "metrics-server-57f55c9bc5-54cdb" [8e42f000-1c93-462c-966c-ce0f162cac9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:31:38.073943  747181 system_pods.go:89] "storage-provisioner" [6d6131ed-db27-4bdd-8645-38ef42ddb1a8] Running
	I0223 01:31:38.073956  747181 system_pods.go:126] duration metric: took 4.534328ms to wait for k8s-apps to be running ...
	I0223 01:31:38.073969  747181 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 01:31:38.074020  747181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:31:38.085228  747181 system_svc.go:56] duration metric: took 11.252838ms WaitForService to wait for kubelet.
	I0223 01:31:38.085252  747181 kubeadm.go:581] duration metric: took 4m12.59802964s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 01:31:38.085280  747181 node_conditions.go:102] verifying NodePressure condition ...
	I0223 01:31:38.087554  747181 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 01:31:38.087572  747181 node_conditions.go:123] node cpu capacity is 8
	I0223 01:31:38.087583  747181 node_conditions.go:105] duration metric: took 2.293685ms to run NodePressure ...
	I0223 01:31:38.087594  747181 start.go:228] waiting for startup goroutines ...
	I0223 01:31:38.087605  747181 start.go:233] waiting for cluster config update ...
	I0223 01:31:38.087620  747181 start.go:242] writing updated cluster config ...
	I0223 01:31:38.087918  747181 ssh_runner.go:195] Run: rm -f paused
	I0223 01:31:38.136302  747181 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0223 01:31:38.139226  747181 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643873" cluster and "default" namespace by default
	I0223 01:33:19.803128  764048 kubeadm.go:322] 
	I0223 01:33:19.803259  764048 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:33:19.803344  764048 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:33:19.803356  764048 kubeadm.go:322] 
	I0223 01:33:19.803405  764048 kubeadm.go:322] This error is likely caused by:
	I0223 01:33:19.803459  764048 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:33:19.803603  764048 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:33:19.803628  764048 kubeadm.go:322] 
	I0223 01:33:19.803738  764048 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:33:19.803768  764048 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:33:19.803850  764048 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:33:19.803871  764048 kubeadm.go:322] 
	I0223 01:33:19.803995  764048 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:33:19.804094  764048 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:33:19.804166  764048 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:33:19.804208  764048 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:33:19.804275  764048 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:33:19.804316  764048 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:33:19.807097  764048 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:33:19.807290  764048 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:33:19.807529  764048 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:33:19.807675  764048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:33:19.807772  764048 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:33:19.807870  764048 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 01:33:19.808072  764048 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
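	The troubleshooting steps kubeadm prints above can be collected into one helper to run on the node (reachable via `minikube ssh`). A hedged sketch — the function name is invented, and the commands are exactly the ones kubeadm suggests:

```shell
# Hypothetical helper wrapping kubeadm's suggested diagnostics for a
# wait-control-plane timeout on a systemd + docker node.
diagnose_kubelet() {
  # Is the kubelet service running at all?
  systemctl status kubelet --no-pager
  # Recent kubelet journal entries, where cgroup/misconfiguration errors land
  journalctl -xeu kubelet --no-pager | tail -n 100
  # List Kubernetes containers, excluding pause sandboxes, to spot a
  # crashed control-plane component; inspect one with: docker logs <ID>
  docker ps -a | grep kube | grep -v pause
}
```

	In this run the failure mode matches the first bullet: no control-plane containers ever come up (see the empty `docker ps` filter results after `StartCluster complete`), pointing at the kubelet rather than a crashed component.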
	
	I0223 01:33:19.808143  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:33:20.547610  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:33:20.558373  764048 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:33:20.558424  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:33:20.566388  764048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:33:20.566427  764048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:33:20.729151  764048 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:33:20.781037  764048 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:33:20.781265  764048 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:33:20.850891  764048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:37:22.170348  764048 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:37:22.170473  764048 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 01:37:22.173668  764048 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:37:22.173765  764048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:37:22.173849  764048 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:37:22.173919  764048 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:37:22.173985  764048 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:37:22.174061  764048 kubeadm.go:322] OS: Linux
	I0223 01:37:22.174159  764048 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:37:22.174260  764048 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:37:22.174347  764048 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:37:22.174416  764048 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:37:22.174494  764048 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:37:22.174580  764048 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:37:22.174682  764048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:37:22.174824  764048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:37:22.174918  764048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:37:22.175001  764048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:37:22.175091  764048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:37:22.175146  764048 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:37:22.175219  764048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:37:22.178003  764048 out.go:204]   - Generating certificates and keys ...
	I0223 01:37:22.178119  764048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:37:22.178193  764048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:37:22.178302  764048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:37:22.178387  764048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:37:22.178478  764048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:37:22.178552  764048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:37:22.178641  764048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:37:22.178748  764048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:37:22.178857  764048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:37:22.178961  764048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:37:22.179025  764048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:37:22.179093  764048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:37:22.179146  764048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:37:22.179223  764048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:37:22.179324  764048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:37:22.179381  764048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:37:22.179437  764048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:37:22.181274  764048 out.go:204]   - Booting up control plane ...
	I0223 01:37:22.181375  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:37:22.181453  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:37:22.181527  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:37:22.181637  764048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:37:22.181807  764048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:37:22.181876  764048 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:37:22.181886  764048 kubeadm.go:322] 
	I0223 01:37:22.181942  764048 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:37:22.182003  764048 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:37:22.182013  764048 kubeadm.go:322] 
	I0223 01:37:22.182075  764048 kubeadm.go:322] This error is likely caused by:
	I0223 01:37:22.182121  764048 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:37:22.182283  764048 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:37:22.182302  764048 kubeadm.go:322] 
	I0223 01:37:22.182461  764048 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:37:22.182511  764048 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:37:22.182563  764048 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:37:22.182575  764048 kubeadm.go:322] 
	I0223 01:37:22.182695  764048 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:37:22.182775  764048 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:37:22.182859  764048 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:37:22.182908  764048 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:37:22.183006  764048 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:37:22.183099  764048 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:37:22.183153  764048 kubeadm.go:406] StartCluster complete in 12m22.714008739s
	I0223 01:37:22.183276  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:37:22.201132  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.201156  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:37:22.201204  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:37:22.217542  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.217566  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:37:22.217616  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:37:22.234150  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.234171  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:37:22.234219  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:37:22.250946  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.250970  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:37:22.251013  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:37:22.268791  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.268815  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:37:22.268861  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:37:22.285465  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.285490  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:37:22.285540  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:37:22.300896  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.300922  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:37:22.300966  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:37:22.318198  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.318231  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:37:22.318247  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:37:22.318263  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:37:22.344168  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:37:22.344203  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:37:22.403384  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:37:22.403409  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:37:22.403422  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:37:22.420357  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:37:22.420386  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:37:22.457253  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:37:22.457281  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:37:22.486720  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:37:22.488920  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:37:22.490985  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:37:22.500879  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:12 old-k8s-version-799707 kubelet[11323]: E0223 01:37:12.661582   11323 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:37:22.507247  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:16 old-k8s-version-799707 kubelet[11323]: E0223 01:37:16.660990   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:37:22.509845  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:17 old-k8s-version-799707 kubelet[11323]: E0223 01:37:17.661645   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:37:22.509984  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:17 old-k8s-version-799707 kubelet[11323]: E0223 01:37:17.662744   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:37:22.517459  764048 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 01:37:22.517494  764048 out.go:239] * 
	W0223 01:37:22.517554  764048 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0223 01:37:22.517575  764048 out.go:239] * 
	W0223 01:37:22.518396  764048 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 01:37:22.521264  764048 out.go:177] X Problems detected in kubelet:
	I0223 01:37:22.522757  764048 out.go:177]   Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:37:22.525145  764048 out.go:177]   Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:37:22.526737  764048 out.go:177]   Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:37:22.529582  764048 out.go:177] 
	W0223 01:37:22.531019  764048 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0223 01:37:22.531067  764048 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 01:37:22.531087  764048 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 01:37:22.532677  764048 out.go:177] 
	
	
	==> Docker <==
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: Stopping Docker Application Container Engine...
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[845]: time="2024-02-23T01:24:56.103151130Z" level=info msg="Processing signal 'terminated'"
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[845]: time="2024-02-23T01:24:56.104505379Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[845]: time="2024-02-23T01:24:56.105446277Z" level=info msg="Daemon shutdown complete"
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: docker.service: Deactivated successfully.
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: Stopped Docker Application Container Engine.
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: Starting Docker Application Container Engine...
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:56.156401519Z" level=info msg="Starting up"
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:56.174000685Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.449651506Z" level=info msg="Loading containers: start."
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.550676031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.586610824Z" level=info msg="Loading containers: done."
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.596597841Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.596662086Z" level=info msg="Daemon has completed initialization"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.617669481Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.617720354Z" level=info msg="API listen on [::]:2376"
	Feb 23 01:24:58 old-k8s-version-799707 systemd[1]: Started Docker Application Container Engine.
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.150527368Z" level=info msg="ignoring event" container=47668c78cdcb1fce2bb766c0cc09b16a2b0c61141d55b119ba43d7783590e950 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.212846569Z" level=info msg="ignoring event" container=0750d0692fa246cbba2bfa199447688b11d7ef4e766d4cfee3f719b8fabb4d10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.274216714Z" level=info msg="ignoring event" container=af61d65ff239c9d8d9c5f51a91457866fe6c7ec9cd20158c6209df57234f97eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.336509826Z" level=info msg="ignoring event" container=d60ff522117b24ef563225226ccde3f77f3cb9c3357213d2f1251c0458cac926 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.323035930Z" level=info msg="ignoring event" container=200cf4da53a64c3709b8e625771b2d40b06e1c3c2dfb1919cf2308015d9d6023 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.389474604Z" level=info msg="ignoring event" container=8736a711f3850a76b0836c4dd74120a343f30737a20f4d1d7f646e921b6fcde9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.451511719Z" level=info msg="ignoring event" container=5a61bb3f909310bec9e2b894421396cc9569df4766d9beab54d8518ece47561b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.513908192Z" level=info msg="ignoring event" container=11a291a89617ea6f0f076c0e7f0b8512ad4d8b29db67ced225b3fbc08a154a1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 40 dd 8b cc 1d 08 06
	[Feb23 01:21] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 4a 0e 7e 15 d5 08 06
	[  +0.181916] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 bb cb 5d 9c af 08 06
	[  +6.500772] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 7d 73 be 05 49 08 06
	[ +15.142601] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d6 67 fc 1f c4 25 08 06
	[Feb23 01:22] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 26 b6 c3 e3 30 08 06
	[  +8.036365] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 e6 83 29 6d 96 08 06
	[  +0.087440] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 d1 55 83 c1 4e 08 06
	[  +1.229927] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 62 6e c8 47 3f 08 06
	[  +8.749689] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 4f 42 15 1f bb 08 06
	[Feb23 01:23] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 2f 4d 78 36 ec 08 06
	[Feb23 01:27] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da ab 2f 5a 1b 4a 08 06
	[  +9.876056] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 8a f3 8a e1 ab 08 06
	
	
	==> kernel <==
	 01:37:23 up  2:19,  0 users,  load average: 0.08, 0.29, 1.16
	Linux old-k8s-version-799707 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.055356   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.155539   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.200611   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.255734   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.355895   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.401544   11323 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.456057   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.556256   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.601659   11323 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.656493   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.756670   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.801314   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.856856   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:22 old-k8s-version-799707 kubelet[11323]: E0223 01:37:22.957014   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.001742   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.057225   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.157425   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.201415   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.257628   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.357823   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.402358   11323 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.458014   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.558174   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.602416   11323 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:37:23 old-k8s-version-799707 kubelet[11323]: E0223 01:37:23.658355   11323 kubelet.go:2267] node "old-k8s-version-799707" not found

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 2 (293.204551ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-799707" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (757.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (454.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:37:26.140080  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/default-k8s-diff-port-643873/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:37:43.126680  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:37:50.470338  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:38:13.560498  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:38:35.079489  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:39:15.087522  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 01:39:21.125275  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:39:26.322213  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:39:40.457029  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:40:46.241219  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:40:47.666433  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:41:01.514316  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
E0223 01:41:05.270851  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0223 01:41:55.031285  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
	[identical WARNING repeated 4 times in total]
E0223 01:41:58.455567  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/default-k8s-diff-port-643873/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
	[identical WARNING repeated 26 times in total]
E0223 01:42:24.560356  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
	[identical WARNING repeated 18 times in total]
E0223 01:42:43.126636  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
	[identical WARNING repeated 8 times in total]
E0223 01:42:50.470224  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
	[identical WARNING repeated 23 times in total]
E0223 01:43:13.560735  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 20 more times]
E0223 01:43:35.079791  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 15 more times]
E0223 01:43:50.710582  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 23 more times]
E0223 01:44:15.087250  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 5 more times]
E0223 01:44:21.124489  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 5 more times]
E0223 01:44:26.321964  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 13 more times]
E0223 01:44:40.456949  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799707 -n old-k8s-version-799707
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 2 (276.372358ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-799707" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-799707
helpers_test.go:235: (dbg) docker inspect old-k8s-version-799707:

-- stdout --
	[
	    {
	        "Id": "f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef",
	        "Created": "2024-02-23T01:15:05.474444114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 764330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-23T01:24:47.445426862Z",
	            "FinishedAt": "2024-02-23T01:24:45.932121046Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/hosts",
	        "LogPath": "/var/lib/docker/containers/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef/f679df36dcf990835f9af969714a9f7aef6a6f4e8756578784e54dc46c3ccfef-json.log",
	        "Name": "/old-k8s-version-799707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-799707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-799707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68-init/diff:/var/lib/docker/overlay2/b6c3064e580e9d3be1c1e7c2f22af1522ce3c491365d231a5e8d9c0e313889c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2722ea5301b511ffa3da3e66dbd1d633ae1270718cb4e5e318ff35486007a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-799707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-799707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-799707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-799707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "495a40141205d4b737de198208cda7ff4e29ad58e3734988072fdb79c40f1629",
	            "SandboxKey": "/var/run/docker/netns/495a40141205",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-799707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f679df36dcf9",
	                        "old-k8s-version-799707"
	                    ],
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "bd295bc817aac655859be5f1040d2c41b5d0e7f3be9c06731d2af745450199fa",
	                    "EndpointID": "c759a40b24c96fa9e217e997f388484d82cda8c2ddb821b96919ecc179490888",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-799707",
	                        "f679df36dcf9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 2 (271.782141ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-799707 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-643873       | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-538058             | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-538058                  | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-538058 --memory=2200 --alsologtostderr   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:22 UTC | 23 Feb 24 01:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-538058 image list                           | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	| delete  | -p newest-cni-538058                                   | newest-cni-538058            | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC | 23 Feb 24 01:23 UTC |
	| addons  | enable metrics-server -p old-k8s-version-799707        | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-799707                              | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:24 UTC | 23 Feb 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-799707             | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:24 UTC | 23 Feb 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-799707                              | old-k8s-version-799707       | jenkins | v1.32.0 | 23 Feb 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-039066 image list                          | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	| delete  | -p embed-certs-039066                                  | embed-certs-039066           | jenkins | v1.32.0 | 23 Feb 24 01:26 UTC | 23 Feb 24 01:26 UTC |
	| image   | default-k8s-diff-port-643873                           | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-643873 | jenkins | v1.32.0 | 23 Feb 24 01:31 UTC | 23 Feb 24 01:31 UTC |
	|         | default-k8s-diff-port-643873                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/23 01:24:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 01:24:47.003793  764048 out.go:291] Setting OutFile to fd 1 ...
	I0223 01:24:47.004093  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:24:47.004104  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:24:47.004109  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:24:47.004297  764048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 01:24:47.004973  764048 out.go:298] Setting JSON to false
	I0223 01:24:47.006519  764048 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7636,"bootTime":1708643851,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 01:24:47.006586  764048 start.go:139] virtualization: kvm guest
	I0223 01:24:47.008747  764048 out.go:177] * [old-k8s-version-799707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 01:24:47.010551  764048 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 01:24:47.011904  764048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 01:24:47.010620  764048 notify.go:220] Checking for updates...
	I0223 01:24:47.014507  764048 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:24:47.015864  764048 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 01:24:47.017138  764048 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 01:24:47.018411  764048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 01:24:47.020066  764048 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:24:47.021857  764048 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0223 01:24:47.023120  764048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 01:24:47.046565  764048 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 01:24:47.046673  764048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:24:47.099610  764048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:24:47.089716386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:24:47.099718  764048 docker.go:295] overlay module found
	I0223 01:24:47.101615  764048 out.go:177] * Using the docker driver based on existing profile
	I0223 01:24:47.102883  764048 start.go:299] selected driver: docker
	I0223 01:24:47.102897  764048 start.go:903] validating driver "docker" against &{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:47.102997  764048 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 01:24:47.103795  764048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 01:24:47.153625  764048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 01:24:47.144803249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 01:24:47.154044  764048 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 01:24:47.154166  764048 cni.go:84] Creating CNI manager for ""
	I0223 01:24:47.154193  764048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:24:47.154210  764048 start_flags.go:323] config:
	{Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:47.156027  764048 out.go:177] * Starting control plane node old-k8s-version-799707 in cluster old-k8s-version-799707
	I0223 01:24:47.157370  764048 cache.go:121] Beginning downloading kic base image for docker with docker
	I0223 01:24:47.158890  764048 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0223 01:24:47.160251  764048 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:24:47.160288  764048 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 01:24:47.160309  764048 cache.go:56] Caching tarball of preloaded images
	I0223 01:24:47.160343  764048 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0223 01:24:47.160431  764048 preload.go:174] Found /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 01:24:47.160444  764048 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 01:24:47.160574  764048 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:24:47.176632  764048 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0223 01:24:47.176654  764048 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0223 01:24:47.176673  764048 cache.go:194] Successfully downloaded all kic artifacts
	I0223 01:24:47.176702  764048 start.go:365] acquiring machines lock for old-k8s-version-799707: {Name:mkec58acc477a1259ea890fef71c8d064abcdc6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 01:24:47.176766  764048 start.go:369] acquired machines lock for "old-k8s-version-799707" in 43.242µs
	I0223 01:24:47.176791  764048 start.go:96] Skipping create...Using existing machine configuration
	I0223 01:24:47.176797  764048 fix.go:54] fixHost starting: 
	I0223 01:24:47.177008  764048 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:24:47.192721  764048 fix.go:102] recreateIfNeeded on old-k8s-version-799707: state=Stopped err=<nil>
	W0223 01:24:47.192746  764048 fix.go:128] unexpected machine state, will restart: <nil>
	I0223 01:24:47.194605  764048 out.go:177] * Restarting existing docker container for "old-k8s-version-799707" ...
	I0223 01:24:43.956785  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:45.957865  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:48.456454  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:45.509045  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:48.007627  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:47.195889  764048 cli_runner.go:164] Run: docker start old-k8s-version-799707
	I0223 01:24:47.452279  764048 cli_runner.go:164] Run: docker container inspect old-k8s-version-799707 --format={{.State.Status}}
	I0223 01:24:47.471747  764048 kic.go:430] container "old-k8s-version-799707" state is running.
	I0223 01:24:47.472285  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:47.489570  764048 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/config.json ...
	I0223 01:24:47.489761  764048 machine.go:88] provisioning docker machine ...
	I0223 01:24:47.489782  764048 ubuntu.go:169] provisioning hostname "old-k8s-version-799707"
	I0223 01:24:47.489818  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:47.506471  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:47.506715  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:47.506741  764048 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-799707 && echo "old-k8s-version-799707" | sudo tee /etc/hostname
	I0223 01:24:47.507401  764048 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40278->127.0.0.1:33414: read: connection reset by peer
	I0223 01:24:50.649171  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-799707
	
	I0223 01:24:50.649264  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:50.668220  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:50.668659  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:50.668690  764048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-799707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-799707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-799707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 01:24:50.798415  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:24:50.798446  764048 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
	I0223 01:24:50.798504  764048 ubuntu.go:177] setting up certificates
	I0223 01:24:50.798521  764048 provision.go:83] configureAuth start
	I0223 01:24:50.798581  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:50.815373  764048 provision.go:138] copyHostCerts
	I0223 01:24:50.815447  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
	I0223 01:24:50.815464  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
	I0223 01:24:50.815542  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
	I0223 01:24:50.815649  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
	I0223 01:24:50.815662  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
	I0223 01:24:50.815698  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
	I0223 01:24:50.815828  764048 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
	I0223 01:24:50.815845  764048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
	I0223 01:24:50.815883  764048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
	I0223 01:24:50.815954  764048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-799707 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-799707]
	I0223 01:24:50.956162  764048 provision.go:172] copyRemoteCerts
	I0223 01:24:50.956237  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 01:24:50.956294  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:50.973887  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.066745  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 01:24:51.088783  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 01:24:51.114161  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 01:24:51.136302  764048 provision.go:86] duration metric: configureAuth took 337.765346ms
	I0223 01:24:51.136338  764048 ubuntu.go:193] setting minikube options for container-runtime
	I0223 01:24:51.136542  764048 config.go:182] Loaded profile config "old-k8s-version-799707": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 01:24:51.136603  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.153110  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.153343  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.153360  764048 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 01:24:51.282447  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 01:24:51.282475  764048 ubuntu.go:71] root file system type: overlay
	I0223 01:24:51.282624  764048 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 01:24:51.282692  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.300243  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.300450  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.300510  764048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 01:24:51.445956  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 01:24:51.446035  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.464137  764048 main.go:141] libmachine: Using SSH client type: native
	I0223 01:24:51.464317  764048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I0223 01:24:51.464339  764048 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 01:24:51.599209  764048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 01:24:51.599236  764048 machine.go:91] provisioned docker machine in 4.109460251s
	I0223 01:24:51.599249  764048 start.go:300] post-start starting for "old-k8s-version-799707" (driver="docker")
	I0223 01:24:51.599259  764048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 01:24:51.599311  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 01:24:51.599368  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.617077  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.714796  764048 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 01:24:51.717878  764048 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 01:24:51.717913  764048 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 01:24:51.717926  764048 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 01:24:51.717935  764048 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0223 01:24:51.717949  764048 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
	I0223 01:24:51.718015  764048 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
	I0223 01:24:51.718126  764048 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
	I0223 01:24:51.718238  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 01:24:51.726135  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:24:51.747990  764048 start.go:303] post-start completed in 148.727396ms
	I0223 01:24:51.748091  764048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 01:24:51.748133  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.764872  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.854725  764048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 01:24:51.858894  764048 fix.go:56] fixHost completed within 4.682089908s
	I0223 01:24:51.858929  764048 start.go:83] releasing machines lock for "old-k8s-version-799707", held for 4.682151168s
	I0223 01:24:51.858987  764048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-799707
	I0223 01:24:51.875113  764048 ssh_runner.go:195] Run: cat /version.json
	I0223 01:24:51.875169  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.875222  764048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 01:24:51.875284  764048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-799707
	I0223 01:24:51.892186  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.892603  764048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/old-k8s-version-799707/id_rsa Username:docker}
	I0223 01:24:51.981915  764048 ssh_runner.go:195] Run: systemctl --version
	I0223 01:24:52.071583  764048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 01:24:52.076094  764048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 01:24:52.076150  764048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 01:24:52.084570  764048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 01:24:52.093490  764048 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 01:24:52.093526  764048 start.go:475] detecting cgroup driver to use...
	I0223 01:24:52.093556  764048 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:24:52.093683  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:24:52.109388  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 01:24:52.119408  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 01:24:52.128541  764048 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 01:24:52.128617  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 01:24:52.138147  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:24:52.148648  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 01:24:52.157740  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 01:24:52.166291  764048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 01:24:52.174294  764048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 01:24:52.182560  764048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 01:24:52.191707  764048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 01:24:52.199478  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:52.279573  764048 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 01:24:52.364794  764048 start.go:475] detecting cgroup driver to use...
	I0223 01:24:52.364849  764048 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 01:24:52.364907  764048 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 01:24:52.378283  764048 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0223 01:24:52.378357  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 01:24:52.390249  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 01:24:52.407123  764048 ssh_runner.go:195] Run: which cri-dockerd
	I0223 01:24:52.410703  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 01:24:52.419413  764048 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0223 01:24:52.436969  764048 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 01:24:52.538363  764048 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 01:24:52.641674  764048 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 01:24:52.641801  764048 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0223 01:24:52.672699  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:52.752635  764048 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:24:53.005432  764048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:24:53.028950  764048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 01:24:50.956327  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:52.956439  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:50.507501  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:52.508735  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:53.053932  764048 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0223 01:24:53.054034  764048 cli_runner.go:164] Run: docker network inspect old-k8s-version-799707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 01:24:53.069369  764048 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0223 01:24:53.072991  764048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:24:53.082986  764048 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 01:24:53.083031  764048 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:24:53.101057  764048 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:24:53.101079  764048 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:24:53.101131  764048 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:24:53.109330  764048 ssh_runner.go:195] Run: which lz4
	I0223 01:24:53.112468  764048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0223 01:24:53.115371  764048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 01:24:53.115398  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0223 01:24:53.900009  764048 docker.go:649] Took 0.787557 seconds to copy over tarball
	I0223 01:24:53.900101  764048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 01:24:55.917765  764048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.017627982s)
	I0223 01:24:55.917798  764048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0223 01:24:55.986783  764048 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0223 01:24:55.995174  764048 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0223 01:24:56.012678  764048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 01:24:56.093644  764048 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 01:24:55.456910  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:57.956346  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:55.008744  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:57.508081  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:24:58.619686  764048 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.525997554s)
	I0223 01:24:58.619778  764048 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 01:24:58.638743  764048 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 01:24:58.638772  764048 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0223 01:24:58.638784  764048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0223 01:24:58.640360  764048 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0223 01:24:58.640468  764048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.640607  764048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.640677  764048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.640855  764048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.640978  764048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.641912  764048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.642118  764048 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.642279  764048 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0223 01:24:58.642467  764048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.642541  764048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.642661  764048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.642840  764048 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.643303  764048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.643387  764048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.643504  764048 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.801512  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0223 01:24:58.810449  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.822313  764048 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0223 01:24:58.822362  764048 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0223 01:24:58.822407  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0223 01:24:58.828135  764048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0223 01:24:58.828187  764048 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.828232  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0223 01:24:58.832726  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:24:58.841318  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0223 01:24:58.843598  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.845024  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.847494  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0223 01:24:58.863715  764048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0223 01:24:58.863770  764048 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.863800  764048 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0223 01:24:58.863817  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0223 01:24:58.863840  764048 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.863881  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0223 01:24:58.877108  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.881932  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0223 01:24:58.883024  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0223 01:24:58.887720  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.888992  764048 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.896469  764048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0223 01:24:58.896520  764048 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.896568  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0223 01:24:58.909663  764048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0223 01:24:58.909718  764048 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.909761  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0223 01:24:58.909764  764048 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0223 01:24:58.909801  764048 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.909863  764048 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0223 01:24:58.915957  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0223 01:24:58.930358  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0223 01:24:58.930531  764048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0223 01:24:58.930584  764048 cache_images.go:92] LoadImages completed in 291.787416ms
	W0223 01:24:58.930662  764048 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0223 01:24:58.930711  764048 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 01:24:59.002793  764048 cni.go:84] Creating CNI manager for ""
	I0223 01:24:59.002825  764048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0223 01:24:59.002849  764048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 01:24:59.002873  764048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-799707 NodeName:old-k8s-version-799707 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0223 01:24:59.003021  764048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-799707"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-799707
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.94.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 01:24:59.003101  764048 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-799707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 01:24:59.003150  764048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 01:24:59.011882  764048 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 01:24:59.011955  764048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 01:24:59.020226  764048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 01:24:59.036352  764048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 01:24:59.052765  764048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 01:24:59.068716  764048 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0223 01:24:59.071794  764048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 01:24:59.081516  764048 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707 for IP: 192.168.94.2
	I0223 01:24:59.081554  764048 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:24:59.081720  764048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
	I0223 01:24:59.081765  764048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
	I0223 01:24:59.081865  764048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/client.key
	I0223 01:24:59.081931  764048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key.ad8e880a
	I0223 01:24:59.081989  764048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key
	I0223 01:24:59.082135  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
	W0223 01:24:59.082182  764048 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
	I0223 01:24:59.082205  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 01:24:59.082240  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
	I0223 01:24:59.082275  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
	I0223 01:24:59.082304  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
	I0223 01:24:59.082383  764048 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
	I0223 01:24:59.083221  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 01:24:59.105664  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 01:24:59.127530  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 01:24:59.149110  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/old-k8s-version-799707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 01:24:59.171812  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 01:24:59.194479  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 01:24:59.215613  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 01:24:59.236896  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 01:24:59.258380  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
	I0223 01:24:59.280812  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 01:24:59.303146  764048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
	I0223 01:24:59.325675  764048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 01:24:59.342098  764048 ssh_runner.go:195] Run: openssl version
	I0223 01:24:59.347196  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
	I0223 01:24:59.355998  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.359380  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.359434  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
	I0223 01:24:59.366000  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
	I0223 01:24:59.373883  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
	I0223 01:24:59.383550  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.386803  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.386851  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
	I0223 01:24:59.393159  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 01:24:59.401114  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 01:24:59.410493  764048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.413720  764048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.413769  764048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 01:24:59.419835  764048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
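The hash-named symlinks created above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints the value that names the link. A minimal sketch using a throwaway self-signed cert (the `/tmp` paths and `CN=minikubeCA` subject are illustrative, not taken from this run):

```shell
# Sketch of the <hash>.0 naming used above: `openssl x509 -hash` prints
# the subject-name hash that OpenSSL's CA directory lookup expects as a
# filename. Throwaway self-signed cert for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
# The log's `ln -fs ... /etc/ssl/certs/<hash>.0` uses this value:
echo "/etc/ssl/certs/${hash}.0 -> demo-ca.pem"
```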
	I0223 01:24:59.428503  764048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0223 01:24:59.431930  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0223 01:24:59.438516  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0223 01:24:59.444802  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0223 01:24:59.451032  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0223 01:24:59.457355  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0223 01:24:59.463364  764048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
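The `-checkend 86400` runs above ask whether each control-plane cert remains valid for at least one more day (86400 seconds). A minimal sketch with a throwaway cert (paths and subject are illustrative assumptions):

```shell
# `-checkend N` exits 0 (printing "Certificate will not expire") when
# the cert is still valid N seconds from now, and non-zero otherwise.
# Throwaway self-signed cert, valid 2 days, for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/ck.key -out /tmp/ck.pem -days 2 2>/dev/null
if openssl x509 -noout -in /tmp/ck.pem -checkend 86400 >/dev/null; then
  echo "still valid tomorrow"
fi
```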
	I0223 01:24:59.469151  764048 kubeadm.go:404] StartCluster: {Name:old-k8s-version-799707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-799707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 01:24:59.469277  764048 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:24:59.486256  764048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 01:24:59.494482  764048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0223 01:24:59.494549  764048 kubeadm.go:636] restartCluster start
	I0223 01:24:59.494602  764048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 01:24:59.502465  764048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:24:59.503492  764048 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-799707" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:24:59.504158  764048 kubeconfig.go:146] "old-k8s-version-799707" context is missing from /home/jenkins/minikube-integration/18233-317564/kubeconfig - will repair!
	I0223 01:24:59.505058  764048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/kubeconfig: {Name:mk5dc50cd20b0f8bda8ed11ebbad47615452aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:24:59.506938  764048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 01:24:59.515443  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:24:59.515508  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:24:59.525625  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.016225  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:00.016379  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:00.026296  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.515803  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:00.515913  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:00.526179  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:01.015710  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:01.015775  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:01.026278  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:01.515779  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:01.515870  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:01.526346  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:00.456513  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:02.956094  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:00.007279  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:02.008550  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:04.507894  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:02.016199  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:02.016270  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:02.026597  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:02.516181  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:02.516275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:02.526556  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:03.016094  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:03.016199  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:03.026612  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:03.516213  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:03.516295  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:03.527347  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:04.015853  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:04.015934  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:04.025845  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:04.516436  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:04.516520  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:04.526628  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.016168  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:05.016238  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:05.026961  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.515470  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:05.515565  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:05.525559  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:06.016173  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:06.016270  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:06.027029  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:06.515495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:06.515612  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:06.525687  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:05.456412  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:07.456833  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:07.007705  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:09.008286  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:07.015495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:07.015568  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:07.026678  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:07.516253  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:07.516337  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:07.526391  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:08.015899  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:08.015968  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:08.025911  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:08.516098  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:08.516167  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:08.526981  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.016463  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:09.016557  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:09.029165  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.516495  764048 api_server.go:166] Checking apiserver status ...
	I0223 01:25:09.516648  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 01:25:09.526971  764048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 01:25:09.527005  764048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
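The repeated check above is a poll on `pgrep -xnf kube-apiserver.*minikube.*`: exit status 1 (no process whose full command line matches) is what produces each `stopped: unable to get apiserver pid` line until the deadline expires. A minimal sketch of the flag semantics, substituting a `sleep` process for the apiserver (pattern and process are illustrative only):

```shell
# pgrep -f matches against the full command line, -x requires the regex
# to match that line exactly, and -n returns only the newest match;
# exit status 1 means no process matched.
sleep 300 &
pid=$!
if pgrep -xnf "sleep.*300" >/dev/null; then echo "apiserver-style match"; fi
kill "$pid"
```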
	I0223 01:25:09.527018  764048 kubeadm.go:1135] stopping kube-system containers ...
	I0223 01:25:09.527081  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 01:25:09.546502  764048 docker.go:483] Stopping containers: [b2cc87eecf70 a9fc8445a236 12be4814f743 7c810d52cd53]
	I0223 01:25:09.546580  764048 ssh_runner.go:195] Run: docker stop b2cc87eecf70 a9fc8445a236 12be4814f743 7c810d52cd53
	I0223 01:25:09.563682  764048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 01:25:09.576338  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:25:09.584800  764048 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb 23 01:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 23 01:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 23 01:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 23 01:19 /etc/kubernetes/scheduler.conf
	
	I0223 01:25:09.584871  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 01:25:09.593154  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 01:25:09.601622  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 01:25:09.610963  764048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 01:25:09.618978  764048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:25:09.627191  764048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 01:25:09.627226  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:09.680140  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:10.770745  764048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090560392s)
	I0223 01:25:10.770787  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:10.976122  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:11.038904  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 01:25:11.126325  764048 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:25:11.126417  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:11.626797  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:09.956223  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:11.957633  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:11.508301  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:14.007767  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:12.127298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:12.627247  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:13.127338  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:13.627257  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.127311  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.627274  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:15.126534  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:15.627263  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:16.127298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:16.627307  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:14.456395  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:16.456575  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:18.456739  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:16.507659  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:19.007262  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:17.127218  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:17.627134  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:18.127282  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:18.626855  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:19.127245  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:19.627466  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.127275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.627329  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:21.127325  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:21.627266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:20.956537  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:22.956701  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:21.008120  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:23.508140  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:22.127189  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:22.627260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:23.126825  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:23.627188  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:24.126739  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:24.627267  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.127304  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.627260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:26.126891  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:26.626687  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:25.457141  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:27.956309  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:26.006787  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:28.007858  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:27.126498  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:27.626585  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:28.127243  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:28.627268  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:29.127312  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:29.627479  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.127263  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.627259  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:31.127252  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:31.627251  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:30.456654  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:32.956862  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:30.508479  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:33.008156  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:32.127266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:32.627298  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:33.127260  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:33.627313  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:34.126749  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:34.626911  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.127303  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.626713  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:36.127324  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:36.626786  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:35.455801  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:37.456877  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:35.507519  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:37.508156  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:37.126523  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:37.627410  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:38.127109  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:38.627259  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.126994  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.626468  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:40.127319  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:40.627250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:41.127266  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:41.626871  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:39.956295  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:42.456582  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:40.007466  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:42.007689  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:44.507564  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:42.127062  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:42.627285  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:43.127532  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:43.627370  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.127314  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.627262  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:45.127243  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:45.627257  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:46.127476  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:46.627250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:44.956710  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:46.957028  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:46.507904  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:49.007521  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:47.126569  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:47.627291  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:48.126638  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:48.627296  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.126978  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.627247  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:50.127306  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:50.626690  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:51.126800  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:51.627229  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:49.455814  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:51.456150  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:53.456683  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:51.007905  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:53.507333  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:52.127250  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:52.627255  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:53.127231  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:53.627268  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:54.127330  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:54.627261  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.127327  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.627272  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:56.127307  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:56.626853  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:55.456774  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:57.956428  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:55.508530  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:58.006786  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:25:57.127275  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:57.627271  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:58.127321  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:58.627294  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.126813  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.627059  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:00.127271  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:00.627113  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:01.127202  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:01.626495  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:25:59.956705  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:01.956833  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:00.007279  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:02.007622  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:04.007691  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:02.126951  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:02.627276  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:03.127241  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:03.627284  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.127323  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.626588  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:05.126876  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:05.627245  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:06.127301  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:06.626519  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:04.456663  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:06.956019  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:06.007948  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:08.507404  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:07.127217  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:07.627286  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:08.126680  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:08.626774  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:09.127378  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:09.627060  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:10.126842  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:10.626792  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:11.126910  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:11.145763  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.145788  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:11.145831  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:11.165136  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.165170  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:11.165223  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:11.182783  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.182815  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:11.182870  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:11.200040  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.200505  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:11.200588  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:11.219336  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.219369  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:11.219481  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:11.236888  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.236916  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:11.236979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:11.255241  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.255276  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:11.255349  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:11.273587  764048 logs.go:276] 0 containers: []
	W0223 01:26:11.273613  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:11.273625  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:11.273645  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:11.301874  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:11.301911  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:11.367953  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:11.367981  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:11.367999  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:11.384915  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:11.384948  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:11.423686  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:11.423719  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:11.443811  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:50 old-k8s-version-799707 kubelet[1655]: E0223 01:25:50.226291    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.446025  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:51 old-k8s-version-799707 kubelet[1655]: E0223 01:25:51.225450    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.448865  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.225297    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.449155  764048 logs.go:138] Found kubelet problem: Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:11.468475  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.468779  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.472671  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.475027  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:11.483450  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:11.483476  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:11.483545  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:11.483556  764048 out.go:239]   Feb 23 01:25:52 old-k8s-version-799707 kubelet[1655]: E0223 01:25:52.226403    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:11.483563  764048 out.go:239]   Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:11.483646  764048 out.go:239]   Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:11.483662  764048 out.go:239]   Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:11.483695  764048 out.go:239]   Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:11.483707  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:11.483716  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:08.956136  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:11.456613  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:11.007625  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:13.507430  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:13.956797  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:16.456765  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:16.007065  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:18.007931  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:21.485306  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:21.496387  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:21.514732  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.514762  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:21.514826  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:21.532743  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.532769  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:21.532815  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:21.550131  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.550159  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:21.550217  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:21.567723  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.567752  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:21.567810  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:21.586824  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.586864  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:21.586931  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:21.605250  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.605278  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:21.605328  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:21.623380  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.623417  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:21.623494  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:21.641554  764048 logs.go:276] 0 containers: []
	W0223 01:26:21.641579  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:21.641593  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:21.641610  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:21.670812  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.226383    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.671137  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:03 old-k8s-version-799707 kubelet[1655]: E0223 01:26:03.227496    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.674833  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:05 old-k8s-version-799707 kubelet[1655]: E0223 01:26:05.225260    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:21.677109  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.691648  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.695439  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.695961  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.697979  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:21.703403  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:21.703431  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:21.730898  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:21.730932  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:21.792948  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:21.792972  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:21.792988  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:21.810167  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:21.810200  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:21.847886  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:21.847911  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:21.847973  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:21.847988  764048 out.go:239]   Feb 23 01:26:06 old-k8s-version-799707 kubelet[1655]: E0223 01:26:06.226718    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.847997  764048 out.go:239]   Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:21.848014  764048 out.go:239]   Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:21.848024  764048 out.go:239]   Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:21.848034  764048 out.go:239]   Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:21.848046  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:21.848068  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:18.955888  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:20.956550  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:23.456416  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:20.508236  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:23.007057  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:25.457015  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:27.956587  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:25.007665  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:27.507691  698728 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:28.007441  698728 pod_ready.go:81] duration metric: took 4m0.005882483s waiting for pod "metrics-server-57f55c9bc5-s48ls" in "kube-system" namespace to be "Ready" ...
	E0223 01:26:28.007462  698728 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0223 01:26:28.007470  698728 pod_ready.go:38] duration metric: took 4m1.599715489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:26:28.007495  698728 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:26:28.007565  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:28.025970  698728 logs.go:276] 1 containers: [aa712cd089c3]
	I0223 01:26:28.026043  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:28.043836  698728 logs.go:276] 1 containers: [0a06962fa4e7]
	I0223 01:26:28.043912  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:28.060799  698728 logs.go:276] 1 containers: [7d17fc420a85]
	I0223 01:26:28.060875  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:28.079718  698728 logs.go:276] 1 containers: [5cac64efae58]
	I0223 01:26:28.079798  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:28.097128  698728 logs.go:276] 1 containers: [eb6e8796d89c]
	I0223 01:26:28.097206  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:28.115072  698728 logs.go:276] 1 containers: [bf8b54a25961]
	I0223 01:26:28.115157  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:28.133065  698728 logs.go:276] 0 containers: []
	W0223 01:26:28.133095  698728 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:28.133154  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:28.151878  698728 logs.go:276] 1 containers: [93cfc293740a]
	I0223 01:26:28.151971  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:26:28.169282  698728 logs.go:276] 1 containers: [73aaf28ba2ee]
	I0223 01:26:28.169321  698728 logs.go:123] Gathering logs for kube-scheduler [5cac64efae58] ...
	I0223 01:26:28.169340  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cac64efae58"
	I0223 01:26:28.196325  698728 logs.go:123] Gathering logs for kube-proxy [eb6e8796d89c] ...
	I0223 01:26:28.196360  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6e8796d89c"
	I0223 01:26:28.218355  698728 logs.go:123] Gathering logs for kube-controller-manager [bf8b54a25961] ...
	I0223 01:26:28.218395  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf8b54a25961"
	I0223 01:26:28.260721  698728 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:28.260761  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:28.317909  698728 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:28.317946  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:26:28.410906  698728 logs.go:123] Gathering logs for kube-apiserver [aa712cd089c3] ...
	I0223 01:26:28.410936  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa712cd089c3"
	I0223 01:26:28.442190  698728 logs.go:123] Gathering logs for etcd [0a06962fa4e7] ...
	I0223 01:26:28.442228  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a06962fa4e7"
	I0223 01:26:28.468887  698728 logs.go:123] Gathering logs for coredns [7d17fc420a85] ...
	I0223 01:26:28.468924  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d17fc420a85"
	I0223 01:26:28.489618  698728 logs.go:123] Gathering logs for kubernetes-dashboard [93cfc293740a] ...
	I0223 01:26:28.489647  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93cfc293740a"
	I0223 01:26:28.510600  698728 logs.go:123] Gathering logs for storage-provisioner [73aaf28ba2ee] ...
	I0223 01:26:28.510629  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73aaf28ba2ee"
	I0223 01:26:28.531980  698728 logs.go:123] Gathering logs for container status ...
	I0223 01:26:28.532010  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:28.588173  698728 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:28.588219  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:26:28.677392  698728 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:28.677430  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:31.849099  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:31.860777  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:31.880217  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.880249  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:31.880321  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:31.900070  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.900104  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:31.900177  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:31.924832  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.924871  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:31.924926  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:31.943201  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.943233  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:31.943293  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:31.963632  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.963659  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:31.963718  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:31.981603  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.981631  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:31.981687  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:31.999354  764048 logs.go:276] 0 containers: []
	W0223 01:26:31.999385  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:31.999443  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:29.957264  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:32.457147  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:31.208447  698728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:31.222642  698728 api_server.go:72] duration metric: took 4m7.146676414s to wait for apiserver process to appear ...
	I0223 01:26:31.222673  698728 api_server.go:88] waiting for apiserver healthz status ...
	I0223 01:26:31.222765  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:31.241520  698728 logs.go:276] 1 containers: [aa712cd089c3]
	I0223 01:26:31.241613  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:31.259085  698728 logs.go:276] 1 containers: [0a06962fa4e7]
	I0223 01:26:31.259167  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:31.278635  698728 logs.go:276] 1 containers: [7d17fc420a85]
	I0223 01:26:31.278707  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:31.296938  698728 logs.go:276] 1 containers: [5cac64efae58]
	I0223 01:26:31.297024  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:31.316657  698728 logs.go:276] 1 containers: [eb6e8796d89c]
	I0223 01:26:31.316743  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:31.336028  698728 logs.go:276] 1 containers: [bf8b54a25961]
	I0223 01:26:31.336114  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:31.353603  698728 logs.go:276] 0 containers: []
	W0223 01:26:31.353639  698728 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:31.353698  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:31.371682  698728 logs.go:276] 1 containers: [93cfc293740a]
	I0223 01:26:31.371764  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:26:31.391011  698728 logs.go:276] 1 containers: [73aaf28ba2ee]
	I0223 01:26:31.391050  698728 logs.go:123] Gathering logs for container status ...
	I0223 01:26:31.391065  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:31.446950  698728 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:31.446982  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:31.475094  698728 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:31.475138  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:26:31.569351  698728 logs.go:123] Gathering logs for kube-apiserver [aa712cd089c3] ...
	I0223 01:26:31.569386  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa712cd089c3"
	I0223 01:26:31.600500  698728 logs.go:123] Gathering logs for etcd [0a06962fa4e7] ...
	I0223 01:26:31.600534  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a06962fa4e7"
	I0223 01:26:31.627728  698728 logs.go:123] Gathering logs for kube-proxy [eb6e8796d89c] ...
	I0223 01:26:31.627757  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6e8796d89c"
	I0223 01:26:31.649569  698728 logs.go:123] Gathering logs for kube-controller-manager [bf8b54a25961] ...
	I0223 01:26:31.649604  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf8b54a25961"
	I0223 01:26:31.692582  698728 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:31.692620  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:26:31.791576  698728 logs.go:123] Gathering logs for coredns [7d17fc420a85] ...
	I0223 01:26:31.791616  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d17fc420a85"
	I0223 01:26:31.812623  698728 logs.go:123] Gathering logs for kube-scheduler [5cac64efae58] ...
	I0223 01:26:31.812657  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cac64efae58"
	I0223 01:26:31.837853  698728 logs.go:123] Gathering logs for kubernetes-dashboard [93cfc293740a] ...
	I0223 01:26:31.837882  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93cfc293740a"
	I0223 01:26:31.860409  698728 logs.go:123] Gathering logs for storage-provisioner [73aaf28ba2ee] ...
	I0223 01:26:31.860446  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73aaf28ba2ee"
	I0223 01:26:31.882327  698728 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:31.882360  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:34.458751  698728 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0223 01:26:34.464178  698728 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0223 01:26:34.465641  698728 api_server.go:141] control plane version: v1.28.4
	I0223 01:26:34.465668  698728 api_server.go:131] duration metric: took 3.242982721s to wait for apiserver health ...
	I0223 01:26:34.465677  698728 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 01:26:34.465741  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:34.487279  698728 logs.go:276] 1 containers: [aa712cd089c3]
	I0223 01:26:34.487353  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:34.506454  698728 logs.go:276] 1 containers: [0a06962fa4e7]
	I0223 01:26:34.506534  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:34.526820  698728 logs.go:276] 1 containers: [7d17fc420a85]
	I0223 01:26:34.526900  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:34.548576  698728 logs.go:276] 1 containers: [5cac64efae58]
	I0223 01:26:34.548656  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:34.569299  698728 logs.go:276] 1 containers: [eb6e8796d89c]
	I0223 01:26:34.569387  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:34.589893  698728 logs.go:276] 1 containers: [bf8b54a25961]
	I0223 01:26:34.589967  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:34.611719  698728 logs.go:276] 0 containers: []
	W0223 01:26:34.611745  698728 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:34.611815  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:34.632584  698728 logs.go:276] 1 containers: [93cfc293740a]
	I0223 01:26:34.632673  698728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:26:34.651078  698728 logs.go:276] 1 containers: [73aaf28ba2ee]
	I0223 01:26:34.651122  698728 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:34.651137  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:26:32.017697  764048 logs.go:276] 0 containers: []
	W0223 01:26:32.017726  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:32.017740  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:32.017757  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:32.045068  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:15 old-k8s-version-799707 kubelet[1655]: E0223 01:26:15.226359    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.048789  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.226707    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.049261  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:17 old-k8s-version-799707 kubelet[1655]: E0223 01:26:17.227807    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:32.051257  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.064250  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.070945  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.073222  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.073688  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:32.075053  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:32.075076  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:32.101810  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:32.101851  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:32.162373  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:32.162404  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:32.162421  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:32.179945  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:32.179980  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:32.216971  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:32.217002  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:32.217070  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:32.217085  764048 out.go:239]   Feb 23 01:26:18 old-k8s-version-799707 kubelet[1655]: E0223 01:26:18.226096    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.217101  764048 out.go:239]   Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:32.217112  764048 out.go:239]   Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:32.217130  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:32.217144  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:26:32.217159  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:32.217167  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:34.740877  698728 logs.go:123] Gathering logs for etcd [0a06962fa4e7] ...
	I0223 01:26:34.740913  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a06962fa4e7"
	I0223 01:26:34.769168  698728 logs.go:123] Gathering logs for coredns [7d17fc420a85] ...
	I0223 01:26:34.769201  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d17fc420a85"
	I0223 01:26:34.791050  698728 logs.go:123] Gathering logs for kube-proxy [eb6e8796d89c] ...
	I0223 01:26:34.791083  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6e8796d89c"
	I0223 01:26:34.813591  698728 logs.go:123] Gathering logs for kube-controller-manager [bf8b54a25961] ...
	I0223 01:26:34.813625  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf8b54a25961"
	I0223 01:26:34.855060  698728 logs.go:123] Gathering logs for kubernetes-dashboard [93cfc293740a] ...
	I0223 01:26:34.855099  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93cfc293740a"
	I0223 01:26:34.880436  698728 logs.go:123] Gathering logs for storage-provisioner [73aaf28ba2ee] ...
	I0223 01:26:34.880463  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73aaf28ba2ee"
	I0223 01:26:34.900248  698728 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:34.900288  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:34.928856  698728 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:34.928895  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:26:35.024451  698728 logs.go:123] Gathering logs for kube-apiserver [aa712cd089c3] ...
	I0223 01:26:35.024483  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa712cd089c3"
	I0223 01:26:35.053946  698728 logs.go:123] Gathering logs for kube-scheduler [5cac64efae58] ...
	I0223 01:26:35.053982  698728 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cac64efae58"
	I0223 01:26:35.079503  698728 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:35.079536  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:35.134351  698728 logs.go:123] Gathering logs for container status ...
	I0223 01:26:35.134387  698728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:37.692865  698728 system_pods.go:59] 8 kube-system pods found
	I0223 01:26:37.692901  698728 system_pods.go:61] "coredns-5dd5756b68-p4fwd" [85a617ed-3344-4942-b1a0-765ff78a4925] Running
	I0223 01:26:37.692908  698728 system_pods.go:61] "etcd-embed-certs-039066" [e4638cee-d774-4316-879d-4d18434da56e] Running
	I0223 01:26:37.692913  698728 system_pods.go:61] "kube-apiserver-embed-certs-039066" [92d93d03-19b0-4ad6-854f-db215a4726fe] Running
	I0223 01:26:37.692918  698728 system_pods.go:61] "kube-controller-manager-embed-certs-039066" [2ef18956-2528-4f90-8d42-4d03fc02b3cc] Running
	I0223 01:26:37.692928  698728 system_pods.go:61] "kube-proxy-hmfbz" [f29b3a5e-06f8-484f-9f53-0a827c604e82] Running
	I0223 01:26:37.692933  698728 system_pods.go:61] "kube-scheduler-embed-certs-039066" [a89eac7f-c55a-4db6-8c33-a8eedf923225] Running
	I0223 01:26:37.692942  698728 system_pods.go:61] "metrics-server-57f55c9bc5-s48ls" [81101e57-c24a-4018-9994-f86d859d120b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:26:37.692948  698728 system_pods.go:61] "storage-provisioner" [1f190a7c-156a-46d4-884e-fe094b5d0ff5] Running
	I0223 01:26:37.692962  698728 system_pods.go:74] duration metric: took 3.227277265s to wait for pod list to return data ...
	I0223 01:26:37.692978  698728 default_sa.go:34] waiting for default service account to be created ...
	I0223 01:26:37.695595  698728 default_sa.go:45] found service account: "default"
	I0223 01:26:37.695622  698728 default_sa.go:55] duration metric: took 2.63602ms for default service account to be created ...
	I0223 01:26:37.695633  698728 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 01:26:37.700487  698728 system_pods.go:86] 8 kube-system pods found
	I0223 01:26:37.700514  698728 system_pods.go:89] "coredns-5dd5756b68-p4fwd" [85a617ed-3344-4942-b1a0-765ff78a4925] Running
	I0223 01:26:37.700520  698728 system_pods.go:89] "etcd-embed-certs-039066" [e4638cee-d774-4316-879d-4d18434da56e] Running
	I0223 01:26:37.700524  698728 system_pods.go:89] "kube-apiserver-embed-certs-039066" [92d93d03-19b0-4ad6-854f-db215a4726fe] Running
	I0223 01:26:37.700528  698728 system_pods.go:89] "kube-controller-manager-embed-certs-039066" [2ef18956-2528-4f90-8d42-4d03fc02b3cc] Running
	I0223 01:26:37.700532  698728 system_pods.go:89] "kube-proxy-hmfbz" [f29b3a5e-06f8-484f-9f53-0a827c604e82] Running
	I0223 01:26:37.700536  698728 system_pods.go:89] "kube-scheduler-embed-certs-039066" [a89eac7f-c55a-4db6-8c33-a8eedf923225] Running
	I0223 01:26:37.700542  698728 system_pods.go:89] "metrics-server-57f55c9bc5-s48ls" [81101e57-c24a-4018-9994-f86d859d120b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:26:37.700549  698728 system_pods.go:89] "storage-provisioner" [1f190a7c-156a-46d4-884e-fe094b5d0ff5] Running
	I0223 01:26:37.700557  698728 system_pods.go:126] duration metric: took 4.918ms to wait for k8s-apps to be running ...
	I0223 01:26:37.700564  698728 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 01:26:37.700614  698728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:26:37.712248  698728 system_svc.go:56] duration metric: took 11.67624ms WaitForService to wait for kubelet.
	I0223 01:26:37.712281  698728 kubeadm.go:581] duration metric: took 4m13.636322558s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 01:26:37.712309  698728 node_conditions.go:102] verifying NodePressure condition ...
	I0223 01:26:37.715299  698728 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 01:26:37.715322  698728 node_conditions.go:123] node cpu capacity is 8
	I0223 01:26:37.715337  698728 node_conditions.go:105] duration metric: took 3.021596ms to run NodePressure ...
	I0223 01:26:37.715351  698728 start.go:228] waiting for startup goroutines ...
	I0223 01:26:37.715360  698728 start.go:233] waiting for cluster config update ...
	I0223 01:26:37.715376  698728 start.go:242] writing updated cluster config ...
	I0223 01:26:37.715671  698728 ssh_runner.go:195] Run: rm -f paused
	I0223 01:26:37.764908  698728 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0223 01:26:37.766849  698728 out.go:177] * Done! kubectl is now configured to use "embed-certs-039066" cluster and "default" namespace by default
	I0223 01:26:34.957087  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:37.456374  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:39.456876  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:41.956264  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:42.219253  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:42.229496  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:42.247555  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.247587  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:42.247642  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:42.265205  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.265236  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:42.265284  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:42.284632  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.284661  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:42.284719  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:42.302235  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.302263  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:42.302323  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:42.319683  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.319709  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:42.319767  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:42.338672  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.338696  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:42.338741  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:42.356628  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.356654  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:42.356705  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:42.374290  764048 logs.go:276] 0 containers: []
	W0223 01:26:42.374319  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:42.374334  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:42.374348  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:42.408608  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:42.415731  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:42.418148  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:42.418679  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:42.435726  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:26:42.437740  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:42.437760  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:42.465460  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:42.465489  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:42.524278  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:42.524299  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:42.524312  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:42.540348  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:42.540377  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:42.578403  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:42.578438  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:42.578496  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:42.578507  764048 out.go:239]   Feb 23 01:26:26 old-k8s-version-799707 kubelet[1655]: E0223 01:26:26.227167    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:42.578531  764048 out.go:239]   Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:42.578546  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:42.578551  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:42.578559  764048 out.go:239]   Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:26:42.578573  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:42.578581  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:44.456381  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:46.456636  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:48.956089  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:50.956816  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:53.456420  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:52.580305  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:26:52.590732  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:26:52.607693  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.607725  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:26:52.607771  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:26:52.624842  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.624873  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:26:52.624922  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:26:52.642827  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.642852  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:26:52.642899  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:26:52.660436  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.660462  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:26:52.660517  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:26:52.677507  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.677544  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:26:52.677610  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:26:52.694555  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.694587  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:26:52.694642  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:26:52.712215  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.712248  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:26:52.712299  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:26:52.729809  764048 logs.go:276] 0 containers: []
	W0223 01:26:52.729833  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:26:52.729844  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:26:52.729857  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:26:52.748858  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:30 old-k8s-version-799707 kubelet[1655]: E0223 01:26:30.226647    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:26:52.751124  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.227855    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.752064  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.769290  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:52.772963  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.775019  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.777255  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:52.788895  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:26:52.788921  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:26:52.815781  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:26:52.815820  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:26:52.875541  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:26:52.875571  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:26:52.875587  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:26:52.897948  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:26:52.897975  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:26:52.932891  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:52.932917  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:26:52.933044  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:26:52.933066  764048 out.go:239]   Feb 23 01:26:31 old-k8s-version-799707 kubelet[1655]: E0223 01:26:31.229288    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.933075  764048 out.go:239]   Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:26:52.933087  764048 out.go:239]   Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:26:52.933099  764048 out.go:239]   Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:26:52.933108  764048 out.go:239]   Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:26:52.933117  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:26:52.933127  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:26:55.956390  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace has status "Ready":"False"
	I0223 01:26:57.951411  747181 pod_ready.go:81] duration metric: took 4m0.00105371s waiting for pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace to be "Ready" ...
	E0223 01:26:57.951437  747181 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kwmp7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0223 01:26:57.951458  747181 pod_ready.go:38] duration metric: took 4m14.536189021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:26:57.951490  747181 kubeadm.go:640] restartCluster took 4m31.50180753s
	W0223 01:26:57.951564  747181 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0223 01:26:57.951596  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0223 01:27:04.486872  747181 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.535251417s)
	I0223 01:27:04.486936  747181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:27:04.497746  747181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:27:04.506004  747181 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:27:04.506090  747181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:27:04.513948  747181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:27:04.513996  747181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:27:04.554467  747181 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0223 01:27:04.554541  747181 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:27:04.602705  747181 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:27:04.602819  747181 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:27:04.602903  747181 kubeadm.go:322] OS: Linux
	I0223 01:27:04.602969  747181 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:27:04.603052  747181 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:27:04.603098  747181 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:27:04.603140  747181 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:27:04.603216  747181 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:27:04.603299  747181 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:27:04.603388  747181 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0223 01:27:04.603465  747181 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0223 01:27:04.603522  747181 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0223 01:27:04.665758  747181 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:27:04.665921  747181 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:27:04.666112  747181 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:27:04.932934  747181 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:27:04.937774  747181 out.go:204]   - Generating certificates and keys ...
	I0223 01:27:04.937861  747181 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:27:04.937928  747181 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:27:04.937991  747181 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:27:04.938057  747181 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:27:04.938125  747181 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:27:04.938196  747181 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:27:04.938277  747181 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:27:04.938382  747181 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:27:04.938450  747181 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:27:04.938515  747181 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:27:04.938550  747181 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:27:04.938595  747181 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:27:05.076940  747181 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:27:05.229217  747181 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:27:05.279726  747181 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:27:05.475432  747181 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:27:05.475893  747181 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:27:05.478193  747181 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:27:02.934253  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:02.945035  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:02.964813  764048 logs.go:276] 0 containers: []
	W0223 01:27:02.964846  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:02.964914  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:02.985554  764048 logs.go:276] 0 containers: []
	W0223 01:27:02.985586  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:02.985643  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:03.003541  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.003573  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:03.003636  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:03.023214  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.023240  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:03.023296  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:03.043054  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.043085  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:03.043148  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:03.061854  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.061886  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:03.061941  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:03.081342  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.081374  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:03.081428  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:03.100486  764048 logs.go:276] 0 containers: []
	W0223 01:27:03.100514  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:03.100528  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:03.100545  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:03.121342  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:41 old-k8s-version-799707 kubelet[1655]: E0223 01:26:41.224831    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.125184  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:43 old-k8s-version-799707 kubelet[1655]: E0223 01:26:43.226184    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:03.127641  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:44 old-k8s-version-799707 kubelet[1655]: E0223 01:26:44.225182    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.130747  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.145918  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.152913  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.153303  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.157613  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:03.166434  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:03.166466  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:03.196885  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:03.196921  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:03.265084  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:03.265110  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:03.265124  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:03.282530  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:03.282564  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:03.321418  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:03.321443  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:03.321514  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:03.321527  764048 out.go:239]   Feb 23 01:26:45 old-k8s-version-799707 kubelet[1655]: E0223 01:26:45.225313    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.321540  764048 out.go:239]   Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:03.321554  764048 out.go:239]   Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:03.321563  764048 out.go:239]   Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:03.321573  764048 out.go:239]   Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:03.321582  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:03.321593  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:05.480270  747181 out.go:204]   - Booting up control plane ...
	I0223 01:27:05.480397  747181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:27:05.480508  747181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:27:05.480602  747181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:27:05.492771  747181 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:27:05.493384  747181 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:27:05.493454  747181 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 01:27:05.575125  747181 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:27:11.076961  747181 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501969 seconds
	I0223 01:27:11.077130  747181 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 01:27:11.089694  747181 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 01:27:11.608404  747181 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 01:27:11.608599  747181 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-643873 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 01:27:12.117387  747181 kubeadm.go:322] [bootstrap-token] Using token: euudkt.s0v7jwca9pwpsihr
	I0223 01:27:12.119012  747181 out.go:204]   - Configuring RBAC rules ...
	I0223 01:27:12.119158  747181 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 01:27:12.122902  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 01:27:12.130891  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 01:27:12.133452  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 01:27:12.136237  747181 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 01:27:12.139622  747181 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 01:27:12.149299  747181 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 01:27:12.353394  747181 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 01:27:12.576330  747181 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 01:27:12.577609  747181 kubeadm.go:322] 
	I0223 01:27:12.577688  747181 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 01:27:12.577694  747181 kubeadm.go:322] 
	I0223 01:27:12.577755  747181 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 01:27:12.577759  747181 kubeadm.go:322] 
	I0223 01:27:12.577779  747181 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 01:27:12.577826  747181 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 01:27:12.577867  747181 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 01:27:12.577871  747181 kubeadm.go:322] 
	I0223 01:27:12.577913  747181 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 01:27:12.577917  747181 kubeadm.go:322] 
	I0223 01:27:12.577964  747181 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 01:27:12.577969  747181 kubeadm.go:322] 
	I0223 01:27:12.578016  747181 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 01:27:12.578129  747181 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 01:27:12.578222  747181 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 01:27:12.578234  747181 kubeadm.go:322] 
	I0223 01:27:12.578348  747181 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 01:27:12.578455  747181 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 01:27:12.578464  747181 kubeadm.go:322] 
	I0223 01:27:12.578602  747181 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token euudkt.s0v7jwca9pwpsihr \
	I0223 01:27:12.578759  747181 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcbf83b93e1e99c3b9e337c3de6f53b35429b7347eaa8c3731469bde2d109270 \
	I0223 01:27:12.578791  747181 kubeadm.go:322] 	--control-plane 
	I0223 01:27:12.578802  747181 kubeadm.go:322] 
	I0223 01:27:12.578946  747181 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 01:27:12.578967  747181 kubeadm.go:322] 
	I0223 01:27:12.579076  747181 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token euudkt.s0v7jwca9pwpsihr \
	I0223 01:27:12.579208  747181 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcbf83b93e1e99c3b9e337c3de6f53b35429b7347eaa8c3731469bde2d109270 
	I0223 01:27:12.583047  747181 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:27:12.583199  747181 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:27:12.583226  747181 cni.go:84] Creating CNI manager for ""
	I0223 01:27:12.583245  747181 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 01:27:12.585331  747181 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 01:27:12.586731  747181 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 01:27:12.597524  747181 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 01:27:12.616819  747181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 01:27:12.616909  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:12.616935  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=60a1754c54128d325d930960488a4adf9d1d6f25 minikube.k8s.io/name=default-k8s-diff-port-643873 minikube.k8s.io/updated_at=2024_02_23T01_27_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:12.890943  747181 ops.go:34] apiserver oom_adj: -16
	I0223 01:27:12.891096  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:13.391095  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:13.323129  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:13.333740  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:13.351749  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.351777  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:13.351843  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:13.369194  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.369219  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:13.369271  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:13.386603  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.386629  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:13.386698  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:13.404358  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.404389  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:13.404450  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:13.422585  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.422613  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:13.422674  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:13.440278  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.440309  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:13.440358  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:13.459814  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.459846  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:13.459901  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:13.477486  764048 logs.go:276] 0 containers: []
	W0223 01:27:13.477514  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:13.477529  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:13.477546  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:13.502463  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:13.502498  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:13.567760  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:13.567784  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:13.567802  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:13.586261  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:13.586292  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:13.630660  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:13.630698  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:13.653846  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:53 old-k8s-version-799707 kubelet[1655]: E0223 01:26:53.244805    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.660373  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.226747    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.660749  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:56 old-k8s-version-799707 kubelet[1655]: E0223 01:26:56.227855    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.664562  764048 logs.go:138] Found kubelet problem: Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:13.679481  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.680005  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.683875  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.689235  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:13.691661  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:13.691680  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:13.691742  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:13.691759  764048 out.go:239]   Feb 23 01:26:58 old-k8s-version-799707 kubelet[1655]: E0223 01:26:58.227654    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:13.691770  764048 out.go:239]   Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:13.691778  764048 out.go:239]   Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:13.691784  764048 out.go:239]   Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:13.691792  764048 out.go:239]   Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:13.691801  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:13.691811  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:13.892092  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:14.392134  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:14.892180  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:15.391863  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:15.891413  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:16.391237  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:16.891344  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:17.392063  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:17.891863  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:18.391893  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:18.891539  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:19.391305  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:19.892120  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:20.391562  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:20.891956  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:21.391425  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:21.892176  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:22.391963  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:22.892204  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:23.391144  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:23.891307  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:24.392144  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:24.891266  747181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 01:27:24.972265  747181 kubeadm.go:1088] duration metric: took 12.355415474s to wait for elevateKubeSystemPrivileges.
	I0223 01:27:24.972304  747181 kubeadm.go:406] StartCluster complete in 4m58.548482532s
	I0223 01:27:24.972331  747181 settings.go:142] acquiring lock: {Name:mkdd07176a1016ae9ca7d71258b6199ead689cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:27:24.972428  747181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 01:27:24.973242  747181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/kubeconfig: {Name:mk5dc50cd20b0f8bda8ed11ebbad47615452aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 01:27:24.973480  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 01:27:24.973508  747181 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0223 01:27:24.973614  747181 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973633  747181 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973642  747181 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-643873"
	W0223 01:27:24.973650  747181 addons.go:243] addon storage-provisioner should already be in state true
	I0223 01:27:24.973662  747181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643873"
	I0223 01:27:24.973691  747181 config.go:182] Loaded profile config "default-k8s-diff-port-643873": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 01:27:24.973711  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:24.973738  747181 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973752  747181 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-643873"
	W0223 01:27:24.973759  747181 addons.go:243] addon dashboard should already be in state true
	I0223 01:27:24.973813  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:24.973906  747181 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643873"
	I0223 01:27:24.973929  747181 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-643873"
	W0223 01:27:24.973939  747181 addons.go:243] addon metrics-server should already be in state true
	I0223 01:27:24.973976  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:24.974030  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.974241  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.974303  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.974408  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:24.997976  747181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 01:27:24.999539  747181 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0223 01:27:24.999494  747181 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 01:27:25.002524  747181 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0223 01:27:25.001096  747181 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0223 01:27:25.001121  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 01:27:25.002156  747181 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-643873"
	W0223 01:27:25.003795  747181 addons.go:243] addon default-storageclass should already be in state true
	I0223 01:27:25.005144  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0223 01:27:25.003834  747181 host.go:66] Checking if "default-k8s-diff-port-643873" exists ...
	I0223 01:27:25.003861  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.003882  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0223 01:27:25.005351  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0223 01:27:25.005166  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0223 01:27:25.005412  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.005443  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.005657  747181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643873 --format={{.State.Status}}
	I0223 01:27:25.026979  747181 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 01:27:25.027006  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 01:27:25.027074  747181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643873
	I0223 01:27:25.027623  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.027765  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.027983  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.050392  747181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/default-k8s-diff-port-643873/id_rsa Username:docker}
	I0223 01:27:25.093241  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
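The long `ssh_runner` command above rewrites CoreDNS's Corefile in place: it pipes `kubectl get configmap coredns -o yaml` through `sed` to insert a `hosts` block (mapping `host.minikube.internal` to the gateway IP) before the `forward` plugin line, plus a `log` directive before `errors`, then feeds the result back to `kubectl replace`. The sed program can be exercised standalone; the sample Corefile fragment below is illustrative (not taken from the cluster), and the escapes assume GNU sed, as on the Linux test host:

```shell
#!/bin/sh
# Hypothetical Corefile fragment with the indentation the sed patterns expect.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'

# Same sed program as in the log line above: insert a hosts block before
# `forward`, and a `log` directive before `errors` (GNU sed `i\` syntax).
patched=$(printf '%s\n' "$corefile" | sed \
  -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
  -e '/^        errors *$/i \        log')

printf '%s\n' "$patched"
```

In the real run the input is the full coredns ConfigMap YAML rather than a bare Corefile, but the insertion logic is identical.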
	I0223 01:27:25.194001  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 01:27:25.195219  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 01:27:25.195408  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0223 01:27:25.195425  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0223 01:27:25.197566  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0223 01:27:25.197583  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0223 01:27:25.372510  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0223 01:27:25.372546  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0223 01:27:25.382966  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0223 01:27:25.382994  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0223 01:27:25.485281  747181 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0223 01:27:25.485313  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0223 01:27:25.487050  747181 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643873" context rescaled to 1 replicas
	I0223 01:27:25.487149  747181 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 01:27:25.489098  747181 out.go:177] * Verifying Kubernetes components...
	I0223 01:27:23.692473  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:23.703266  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:23.722231  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.722260  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:23.722328  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:23.740592  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.740625  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:23.740691  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:23.759630  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.759655  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:23.759701  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:23.777152  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.777182  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:23.777252  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:23.794715  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.794746  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:23.794812  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:23.812469  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.812494  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:23.812554  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:23.830330  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.830357  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:23.830409  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:23.847767  764048 logs.go:276] 0 containers: []
	W0223 01:27:23.847791  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:23.847802  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:23.847813  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:23.864330  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:23.864362  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:23.900552  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:23.900582  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:23.935656  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.225637    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:23.936227  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:07 old-k8s-version-799707 kubelet[1655]: E0223 01:27:07.226752    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:23.940498  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:23.946760  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:23.957938  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:23.965312  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:23.967639  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:27:23.968659  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:23.968676  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:23.995207  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:23.995243  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:24.054134  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:24.054163  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:24.054186  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:24.054242  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:24.054257  764048 out.go:239]   Feb 23 01:27:09 old-k8s-version-799707 kubelet[1655]: E0223 01:27:09.226379    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:24.054269  764048 out.go:239]   Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:24.054280  764048 out.go:239]   Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:24.054294  764048 out.go:239]   Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:24.054309  764048 out.go:239]   Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:27:24.054321  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:24.054329  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:25.490732  747181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:27:25.572365  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0223 01:27:25.572399  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0223 01:27:25.771603  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0223 01:27:25.771636  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0223 01:27:25.777396  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0223 01:27:25.795849  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0223 01:27:25.795880  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0223 01:27:25.882916  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0223 01:27:25.882942  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0223 01:27:25.903798  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0223 01:27:25.903832  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0223 01:27:25.986660  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0223 01:27:25.986692  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0223 01:27:26.007706  747181 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0223 01:27:26.007738  747181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0223 01:27:26.088546  747181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0223 01:27:27.291666  747181 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.198371947s)
	I0223 01:27:27.291727  747181 start.go:929] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0223 01:27:27.672404  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478357421s)
	I0223 01:27:27.672499  747181 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.181704611s)
	I0223 01:27:27.672663  747181 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643873" to be "Ready" ...
	I0223 01:27:27.672491  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.47723657s)
	I0223 01:27:27.677947  747181 node_ready.go:49] node "default-k8s-diff-port-643873" has status "Ready":"True"
	I0223 01:27:27.677980  747181 node_ready.go:38] duration metric: took 5.279557ms waiting for node "default-k8s-diff-port-643873" to be "Ready" ...
	I0223 01:27:27.678035  747181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:27:27.687553  747181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-58f8r" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.783675  747181 pod_ready.go:92] pod "coredns-5dd5756b68-58f8r" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.783777  747181 pod_ready.go:81] duration metric: took 96.184241ms waiting for pod "coredns-5dd5756b68-58f8r" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.783802  747181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.794877  747181 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.794906  747181 pod_ready.go:81] duration metric: took 11.086164ms waiting for pod "etcd-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.794920  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.872561  747181 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.872594  747181 pod_ready.go:81] duration metric: took 77.664042ms waiting for pod "kube-apiserver-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.872612  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.879684  747181 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:27.879712  747181 pod_ready.go:81] duration metric: took 7.090402ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.879725  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2rpb8" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:27.910434  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132982172s)
	I0223 01:27:27.910484  747181 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-643873"
	I0223 01:27:28.077337  747181 pod_ready.go:92] pod "kube-proxy-2rpb8" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:28.077366  747181 pod_ready.go:81] duration metric: took 197.632572ms waiting for pod "kube-proxy-2rpb8" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.077383  747181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.478162  747181 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643873" in "kube-system" namespace has status "Ready":"True"
	I0223 01:27:28.478191  747181 pod_ready.go:81] duration metric: took 400.797707ms waiting for pod "kube-scheduler-default-k8s-diff-port-643873" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.478218  747181 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace to be "Ready" ...
	I0223 01:27:28.621653  747181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.533051258s)
	I0223 01:27:28.623454  747181 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-643873 addons enable metrics-server
	
	I0223 01:27:28.624862  747181 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0223 01:27:28.626357  747181 addons.go:505] enable addons completed in 3.652850638s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0223 01:27:30.485483  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:32.986138  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:34.056179  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:34.068644  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:34.091576  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.091606  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:34.091662  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:34.112999  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.113029  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:34.113083  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:34.135911  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.135948  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:34.136009  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:34.155552  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.155584  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:34.155639  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:34.172644  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.172674  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:34.172731  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:34.193231  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.193261  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:34.193318  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:34.213564  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.213587  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:34.213632  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:34.234247  764048 logs.go:276] 0 containers: []
	W0223 01:27:34.234274  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:34.234288  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:34.234304  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:34.254068  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:34.254102  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:34.294146  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:34.294180  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:34.318533  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:12 old-k8s-version-799707 kubelet[1655]: E0223 01:27:12.224861    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.329296  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:34.339920  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:34.343536  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:34.350850  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.358682  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:34.367366  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:34.367396  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:34.403850  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:34.403915  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:34.479101  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:34.479131  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:34.479144  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:34.479211  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:34.479227  764048 out.go:239]   Feb 23 01:27:18 old-k8s-version-799707 kubelet[1655]: E0223 01:27:18.225118    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:34.479247  764048 out.go:239]   Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:34.479266  764048 out.go:239]   Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:34.479275  764048 out.go:239]   Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:34.479284  764048 out.go:239]   Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:34.479292  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:34.479304  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:35.485479  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:37.983973  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:39.984078  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:41.984604  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:44.481194  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:44.492741  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:44.510893  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.510919  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:44.510979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:44.528074  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.528099  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:44.528147  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:44.545615  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.545650  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:44.545711  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:44.562131  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.562157  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:44.562216  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:44.579943  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.579968  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:44.580032  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:44.597379  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.597405  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:44.597469  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:44.614583  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.614645  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:44.614736  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:44.632117  764048 logs.go:276] 0 containers: []
	W0223 01:27:44.632153  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:44.632167  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:44.632182  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:44.649949  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:22 old-k8s-version-799707 kubelet[1655]: E0223 01:27:22.225231    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.652196  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:23 old-k8s-version-799707 kubelet[1655]: E0223 01:27:23.225869    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.657147  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:26 old-k8s-version-799707 kubelet[1655]: E0223 01:27:26.225489    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.664845  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:44.673447  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.677423  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.682830  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.686738  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:44.690877  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:44.690909  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:44.719106  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:44.719147  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:44.778079  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:44.778107  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:44.778126  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:44.794656  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:44.794686  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:44.831247  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:44.831275  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:44.831339  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:44.831351  764048 out.go:239]   Feb 23 01:27:30 old-k8s-version-799707 kubelet[1655]: E0223 01:27:30.229723    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:44.831360  764048 out.go:239]   Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:44.831371  764048 out.go:239]   Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:44.831379  764048 out.go:239]   Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:44.831390  764048 out.go:239]   Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:27:44.831397  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:44.831405  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:44.484397  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:46.983766  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:48.984339  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:50.984548  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:53.485101  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:54.832552  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:27:54.843379  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:27:54.861974  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.862004  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:27:54.862082  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:27:54.880013  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.880054  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:27:54.880110  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:27:54.896746  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.896776  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:27:54.896846  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:27:54.913796  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.913826  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:27:54.913899  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:27:54.931897  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.931928  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:27:54.931988  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:27:54.949435  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.949468  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:27:54.949534  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:27:54.966362  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.966386  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:27:54.966431  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:27:54.983954  764048 logs.go:276] 0 containers: []
	W0223 01:27:54.983982  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:27:54.983995  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:27:54.984011  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:27:54.999879  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:27:54.999907  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:27:55.037126  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:27:55.037156  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:27:55.059470  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:35 old-k8s-version-799707 kubelet[1655]: E0223 01:27:35.228540    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.063298  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:37 old-k8s-version-799707 kubelet[1655]: E0223 01:27:37.226087    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.068690  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:40 old-k8s-version-799707 kubelet[1655]: E0223 01:27:40.226093    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:27:55.072516  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.081122  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.090028  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.092291  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.092793  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:55.095603  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:27:55.095626  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:27:55.123414  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:27:55.123451  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:27:55.179936  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:27:55.179960  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:55.179971  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:27:55.180020  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:27:55.180032  764048 out.go:239]   Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.180039  764048 out.go:239]   Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:27:55.180072  764048 out.go:239]   Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:27:55.180086  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:27:55.180105  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:27:55.180114  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:27:55.180124  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:27:55.984245  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:27:57.984659  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:00.483779  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:02.484843  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:05.181993  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:05.192424  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:05.210121  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.210156  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:05.210200  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:05.228650  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.228675  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:05.228723  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:05.245884  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.245913  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:05.245979  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:05.262993  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.263028  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:05.263088  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:05.280340  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.280371  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:05.280435  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:05.297947  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.297970  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:05.298018  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:05.315334  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.315366  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:05.315425  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:05.332647  764048 logs.go:276] 0 containers: []
	W0223 01:28:05.332671  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:05.332681  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:05.332694  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:05.348614  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:05.348642  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:05.384048  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:05.384079  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:05.402702  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:42 old-k8s-version-799707 kubelet[1655]: E0223 01:27:42.226578    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.411066  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:47 old-k8s-version-799707 kubelet[1655]: E0223 01:27:47.225464    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.419595  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:52 old-k8s-version-799707 kubelet[1655]: E0223 01:27:52.226954    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.421739  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.422302  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:05.430697  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.440486  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.442750  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:05.443073  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:05.443095  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:05.468968  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:05.469004  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:05.527294  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:05.527344  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:05.527358  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:05.527423  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:05.527440  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:05.527456  764048 out.go:239]   Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:05.527471  764048 out.go:239]   Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:05.527486  764048 out.go:239]   Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:05.527501  764048 out.go:239]   Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:05.527515  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:05.527523  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:04.983939  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:06.984331  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:08.984401  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:11.484559  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:15.528852  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:15.540245  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:15.557540  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.557566  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:15.557615  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:15.573753  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.573777  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:15.573835  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:15.590472  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.590500  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:15.590554  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:15.608537  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.608568  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:15.608647  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:15.624845  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.624875  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:15.624930  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:15.641988  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.642016  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:15.642095  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:15.660022  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.660052  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:15.660102  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:15.677241  764048 logs.go:276] 0 containers: []
	W0223 01:28:15.677266  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:15.677277  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:15.677291  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:15.703651  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:15.703682  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:15.762510  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:15.762531  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:15.762544  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:15.778772  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:15.778803  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:15.815612  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:15.815642  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:15.834932  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.226463    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.835453  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:53 old-k8s-version-799707 kubelet[1655]: E0223 01:27:53.228129    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.844214  764048 logs.go:138] Found kubelet problem: Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:15.854157  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:15.856473  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.861781  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.870466  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:15.874488  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:15.874509  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:15.874577  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:15.874592  764048 out.go:239]   Feb 23 01:27:58 old-k8s-version-799707 kubelet[1655]: E0223 01:27:58.224738    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:15.874601  764048 out.go:239]   Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:15.874613  764048 out.go:239]   Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:15.874627  764048 out.go:239]   Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:15.874638  764048 out.go:239]   Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:15.874649  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:15.874660  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:13.986081  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:16.484482  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:18.988090  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:21.484581  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:25.876148  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:25.886833  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:25.903865  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.903895  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:25.903941  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:25.921203  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.921229  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:25.921272  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:25.938748  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.938776  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:25.938825  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:25.956769  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.956792  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:25.956845  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:25.973495  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.973518  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:25.973561  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:25.992272  764048 logs.go:276] 0 containers: []
	W0223 01:28:25.992298  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:25.992349  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:26.010007  764048 logs.go:276] 0 containers: []
	W0223 01:28:26.010030  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:26.010111  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:26.027042  764048 logs.go:276] 0 containers: []
	W0223 01:28:26.027073  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:26.027087  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:26.027103  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:26.083781  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:26.083807  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:26.083824  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:26.099963  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:26.099992  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:26.137069  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:26.137100  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:26.157617  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:04 old-k8s-version-799707 kubelet[1655]: E0223 01:28:04.224971    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.159983  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:05 old-k8s-version-799707 kubelet[1655]: E0223 01:28:05.226226    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.165342  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:26.174225  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:26.179523  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.185204  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.192290  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:28:26.197251  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:26.197274  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:26.222726  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:26.222752  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:26.222806  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:26.222818  764048 out.go:239]   Feb 23 01:28:08 old-k8s-version-799707 kubelet[1655]: E0223 01:28:08.225649    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:26.222824  764048 out.go:239]   Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:26.222834  764048 out.go:239]   Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:26.222842  764048 out.go:239]   Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:26.222853  764048 out.go:239]   Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0223 01:28:26.222864  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:26.222870  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:28:23.984339  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:25.984436  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:28.484517  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:30.984871  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:33.483929  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:36.224294  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:36.234593  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:36.252123  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.252147  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:36.252201  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:36.270152  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.270181  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:36.270234  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:36.286776  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.286803  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:36.286857  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:36.303407  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.303443  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:36.303500  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:36.320332  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.320360  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:36.320402  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:36.337290  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.337318  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:36.337367  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:36.356032  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.356056  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:36.356109  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:36.372883  764048 logs.go:276] 0 containers: []
	W0223 01:28:36.372909  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:36.372919  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:36.372931  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:36.388787  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:36.388825  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:36.424874  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:36.424910  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:36.445848  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:13 old-k8s-version-799707 kubelet[1655]: E0223 01:28:13.224415    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.451297  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:16 old-k8s-version-799707 kubelet[1655]: E0223 01:28:16.224453    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.456927  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:36.463893  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:36.471013  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.477862  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.485415  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:28:36.488865  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:36.488888  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:36.516057  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:36.516089  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:36.573623  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:36.573645  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:36.573658  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:36.573725  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:36.573738  764048 out.go:239]   Feb 23 01:28:19 old-k8s-version-799707 kubelet[1655]: E0223 01:28:19.225025    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:36.573747  764048 out.go:239]   Feb 23 01:28:23 old-k8s-version-799707 kubelet[1655]: E0223 01:28:23.224857    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:36.573757  764048 out.go:239]   Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:36.573771  764048 out.go:239]   Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:36.573783  764048 out.go:239]   Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:28:36.573794  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:36.573807  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
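The cycle above repeatedly probes for each control-plane container with a `docker ps -a --filter=name=k8s_<component>` lookup. As a minimal sketch of that enumeration (the component list is copied from the filters in this log; `k8s_` is the name prefix dockershim gives pod containers), one could generate the same commands like this:

```shell
#!/bin/sh
# Sketch of the per-component container lookup the log collector performs.
# build_lookup_cmd only builds the command string; it does not run docker.
build_lookup_cmd() {
  # $1: component name, e.g. kube-apiserver
  echo "docker ps -a --filter=name=k8s_$1 --format={{.ID}}"
}

# Same component order as the probes in this log.
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet kubernetes-dashboard; do
  build_lookup_cmd "$c"
done
```

Every probe here returns `0 containers`, which is why each lookup is immediately followed by a `No container was found matching` warning.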
	I0223 01:28:35.484648  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:37.984373  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:39.984656  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:42.484920  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:46.575225  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:46.585661  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:46.602730  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.602756  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:46.602806  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:46.620030  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.620061  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:46.620109  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:46.637449  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.637478  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:46.637529  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:46.655302  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.655353  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:46.655405  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:46.672835  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.672859  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:46.672906  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:46.689042  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.689074  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:46.689128  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:46.705921  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.705949  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:46.706010  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:46.722399  764048 logs.go:276] 0 containers: []
	W0223 01:28:46.722429  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:46.722442  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:46.722459  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:46.778773  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:46.778800  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:46.778815  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:46.794759  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:46.794791  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:46.831175  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:46.831207  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:46.858565  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:27 old-k8s-version-799707 kubelet[1655]: E0223 01:28:27.225017    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.865386  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:46.871324  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:46.878984  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:46.881096  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.893561  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:46.894713  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:46.894737  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:46.920290  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:46.920317  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:46.920373  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:46.920384  764048 out.go:239]   Feb 23 01:28:31 old-k8s-version-799707 kubelet[1655]: E0223 01:28:31.225671    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:46.920391  764048 out.go:239]   Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:46.920401  764048 out.go:239]   Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:46.920409  764048 out.go:239]   Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:46.920418  764048 out.go:239]   Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:28:46.920424  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:46.920432  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
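The `Found kubelet problem` lines above come from scanning the kubelet journal (`journalctl -u kubelet -n 400`) for known error patterns. A rough sketch of that scan, filtering a captured sample instead of live `journalctl` output (the pattern and sample lines are illustrative, not minikube's exact matcher):

```shell
#!/bin/sh
# Filter journal lines that look like pod sync failures, the shape of the
# "Found kubelet problem" entries in this log.
scan_kubelet_problems() {
  grep -E 'pod_workers\.go.*Error syncing pod'
}

# Two sample journal lines: one problem line, one ordinary info line.
sample='Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887 1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07
Feb 23 01:28:35 old-k8s-version-799707 kubelet[1655]: I0223 01:28:35.000000 1655 kubelet.go:100] routine status line'

printf '%s\n' "$sample" | scan_kubelet_problems
```

Only the `Error syncing pod` line survives the filter; the routine line is dropped, matching how the log surfaces just the problem entries under `X Problems detected in kubelet:`.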
	I0223 01:28:44.984374  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:46.984544  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:49.484549  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:51.984309  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:56.921234  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:28:56.932263  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:28:56.950133  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.950165  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:28:56.950211  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:28:56.967513  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.967544  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:28:56.967610  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:28:56.985114  764048 logs.go:276] 0 containers: []
	W0223 01:28:56.985135  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:28:56.985190  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:28:57.001619  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.001645  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:28:57.001690  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:28:54.484395  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:56.484684  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:28:57.019356  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.019381  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:28:57.019428  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:28:57.036683  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.036711  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:28:57.036776  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:28:57.053460  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.053489  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:28:57.053536  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:28:57.070212  764048 logs.go:276] 0 containers: []
	W0223 01:28:57.070240  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:28:57.070253  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:28:57.070270  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:28:57.096008  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:28:57.096044  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:28:57.153794  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:28:57.153817  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:28:57.153833  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:28:57.170295  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:28:57.170328  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:28:57.205650  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:28:57.205677  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:28:57.227302  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:34 old-k8s-version-799707 kubelet[1655]: E0223 01:28:34.225887    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.234866  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:38 old-k8s-version-799707 kubelet[1655]: E0223 01:28:38.224878    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.236884  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:57.248965  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:57.254557  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.254822  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.263128  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:57.267869  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:57.267897  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:28:57.267963  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:28:57.267977  764048 out.go:239]   Feb 23 01:28:39 old-k8s-version-799707 kubelet[1655]: E0223 01:28:39.225341    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:28:57.267989  764048 out.go:239]   Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:28:57.267998  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:28:57.268008  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:28:57.268018  764048 out.go:239]   Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:28:57.268026  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:28:57.268031  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
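The "container status" step in each cycle uses a shell fallback chain: ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` — try `crictl` first, fall back to `docker ps -a` if it is absent or fails. The generic pattern can be sketched like this (the helper name and demo commands are illustrative, not from minikube):

```shell
#!/bin/sh
# Run a primary command; if it fails (missing binary, non-zero exit),
# run the fallback instead -- the same `A || B` chain as the log's
# crictl-or-docker container-status step.
primary_or_fallback() {
  # $1: primary command string, $2: fallback command string
  eval "$1" 2>/dev/null || eval "$2"
}

# `false` always fails, so the fallback runs.
primary_or_fallback "false" "echo fallback-ran"  # prints fallback-ran
```

The `which crictl || echo crictl` sub-step in the original command keeps the argument non-empty even when `crictl` is not installed, so the `sudo` invocation fails cleanly and control passes to the `docker ps -a` branch.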
	I0223 01:28:58.984324  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:00.984714  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:03.484718  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:05.984464  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:08.484534  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:07.269999  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:29:07.280827  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:29:07.297977  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.298005  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:29:07.298075  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:29:07.315186  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.315222  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:29:07.315276  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:29:07.332204  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.332234  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:29:07.332284  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:29:07.349378  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.349407  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:29:07.349461  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:29:07.366248  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.366275  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:29:07.366340  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:29:07.384205  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.384229  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:29:07.384287  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:29:07.402600  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.402625  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:29:07.402678  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:29:07.420951  764048 logs.go:276] 0 containers: []
	W0223 01:29:07.420984  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:29:07.421000  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:29:07.421022  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:29:07.446613  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:29:07.446648  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:29:07.505820  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:29:07.505841  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:29:07.505859  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:29:07.521736  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:29:07.521819  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:29:07.559319  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:29:07.559353  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:29:07.583248  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:46 old-k8s-version-799707 kubelet[1655]: E0223 01:28:46.224962    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.588793  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.225927    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.589050  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:29:07.597224  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:29:07.605348  764048 logs.go:138] Found kubelet problem: Feb 23 01:28:59 old-k8s-version-799707 kubelet[1655]: E0223 01:28:59.224549    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.610814  764048 logs.go:138] Found kubelet problem: Feb 23 01:29:02 old-k8s-version-799707 kubelet[1655]: E0223 01:29:02.224722    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.612796  764048 logs.go:138] Found kubelet problem: Feb 23 01:29:03 old-k8s-version-799707 kubelet[1655]: E0223 01:29:03.225000    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:29:07.619406  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:29:07.619427  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0223 01:29:07.619490  764048 out.go:239] X Problems detected in kubelet:
	W0223 01:29:07.619501  764048 out.go:239]   Feb 23 01:28:49 old-k8s-version-799707 kubelet[1655]: E0223 01:28:49.227034    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:29:07.619510  764048 out.go:239]   Feb 23 01:28:54 old-k8s-version-799707 kubelet[1655]: E0223 01:28:54.225638    1655 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:29:07.619519  764048 out.go:239]   Feb 23 01:28:59 old-k8s-version-799707 kubelet[1655]: E0223 01:28:59.224549    1655 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:29:07.619526  764048 out.go:239]   Feb 23 01:29:02 old-k8s-version-799707 kubelet[1655]: E0223 01:29:02.224722    1655 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:29:07.619535  764048 out.go:239]   Feb 23 01:29:03 old-k8s-version-799707 kubelet[1655]: E0223 01:29:03.225000    1655 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:29:07.619540  764048 out.go:304] Setting ErrFile to fd 2...
	I0223 01:29:07.619547  764048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 01:29:10.485157  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:12.983098  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:14.985538  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:16.986317  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:17.620865  764048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:29:17.631202  764048 kubeadm.go:640] restartCluster took 4m18.136634178s
	W0223 01:29:17.631285  764048 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0223 01:29:17.631316  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:29:18.369723  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:29:18.380597  764048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 01:29:18.389648  764048 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:29:18.389701  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:29:18.397500  764048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:29:18.397542  764048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:29:18.444581  764048 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:29:18.444639  764048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:29:18.612172  764048 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:29:18.612306  764048 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:29:18.612397  764048 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:29:18.612453  764048 kubeadm.go:322] OS: Linux
	I0223 01:29:18.612523  764048 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:29:18.612593  764048 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:29:18.612684  764048 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:29:18.612758  764048 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:29:18.612840  764048 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:29:18.612911  764048 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:29:18.685576  764048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:29:18.685704  764048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:29:18.685805  764048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:29:18.862281  764048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:29:18.863574  764048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:29:18.870417  764048 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:29:18.940701  764048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:29:18.943092  764048 out.go:204]   - Generating certificates and keys ...
	I0223 01:29:18.943199  764048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:29:18.943290  764048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:29:18.943424  764048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:29:18.943551  764048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:29:18.943651  764048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:29:18.943746  764048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:29:18.943837  764048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:29:18.943942  764048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:29:18.944060  764048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:29:18.944168  764048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:29:18.944239  764048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:29:18.944323  764048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:29:19.128104  764048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:29:19.237894  764048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:29:19.392875  764048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:29:19.789723  764048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:29:19.790432  764048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:29:19.792764  764048 out.go:204]   - Booting up control plane ...
	I0223 01:29:19.792883  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:29:19.795900  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:29:19.796833  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:29:19.797487  764048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:29:19.801650  764048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:29:19.485472  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:21.984198  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:24.484136  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:26.983917  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:28.984372  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:30.984461  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:33.484393  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:35.984903  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:38.484472  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:40.984280  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:42.984351  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:44.984392  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:47.483908  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:49.484381  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:51.484601  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:53.983584  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:55.983823  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:57.984149  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:29:59.801941  764048 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:30:00.484843  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:02.985193  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:05.486226  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:07.984524  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:10.484401  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:12.984552  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:15.484478  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:17.984565  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:19.984601  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:22.484162  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:24.984814  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:27.484004  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:29.484683  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:31.484863  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:33.983622  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:35.984247  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:38.484031  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:40.484654  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:42.984693  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:45.484942  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:47.984073  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:49.984624  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:51.984683  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:54.484210  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:56.484709  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:30:58.484771  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:00.984626  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:03.484286  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:05.484917  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:07.984315  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:09.984491  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:12.484403  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:14.983684  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:16.984247  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:18.984493  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:21.484329  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:23.983852  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:25.984060  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:28.484334  747181 pod_ready.go:102] pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace has status "Ready":"False"
	I0223 01:31:28.484361  747181 pod_ready.go:81] duration metric: took 4m0.006134852s waiting for pod "metrics-server-57f55c9bc5-54cdb" in "kube-system" namespace to be "Ready" ...
	E0223 01:31:28.484372  747181 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0223 01:31:28.484380  747181 pod_ready.go:38] duration metric: took 4m0.806294848s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 01:31:28.484405  747181 api_server.go:52] waiting for apiserver process to appear ...
	I0223 01:31:28.484502  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:31:28.504509  747181 logs.go:276] 1 containers: [e3f269ae1d93]
	I0223 01:31:28.504590  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:31:28.522143  747181 logs.go:276] 1 containers: [f0e457a2e9eb]
	I0223 01:31:28.522211  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:31:28.540493  747181 logs.go:276] 1 containers: [aefa45a56f54]
	I0223 01:31:28.540571  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:31:28.558804  747181 logs.go:276] 1 containers: [af049e910b16]
	I0223 01:31:28.558898  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:31:28.577087  747181 logs.go:276] 1 containers: [6eaafcfb77d4]
	I0223 01:31:28.577165  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:31:28.594722  747181 logs.go:276] 1 containers: [c980112f54ec]
	I0223 01:31:28.594810  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:31:28.612317  747181 logs.go:276] 0 containers: []
	W0223 01:31:28.612349  747181 logs.go:278] No container was found matching "kindnet"
	I0223 01:31:28.612410  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:31:28.630536  747181 logs.go:276] 1 containers: [d18a90a3d2d1]
	I0223 01:31:28.630608  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:31:28.648473  747181 logs.go:276] 1 containers: [87a0e583f265]
	I0223 01:31:28.648517  747181 logs.go:123] Gathering logs for kube-scheduler [af049e910b16] ...
	I0223 01:31:28.648531  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af049e910b16"
	I0223 01:31:28.674785  747181 logs.go:123] Gathering logs for kube-controller-manager [c980112f54ec] ...
	I0223 01:31:28.674816  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c980112f54ec"
	I0223 01:31:28.713783  747181 logs.go:123] Gathering logs for container status ...
	I0223 01:31:28.713815  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:31:28.767625  747181 logs.go:123] Gathering logs for kubelet ...
	I0223 01:31:28.767659  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:31:28.855755  747181 logs.go:123] Gathering logs for dmesg ...
	I0223 01:31:28.855794  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:31:28.881896  747181 logs.go:123] Gathering logs for kube-apiserver [e3f269ae1d93] ...
	I0223 01:31:28.881929  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3f269ae1d93"
	I0223 01:31:28.910634  747181 logs.go:123] Gathering logs for etcd [f0e457a2e9eb] ...
	I0223 01:31:28.910668  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e457a2e9eb"
	I0223 01:31:28.934870  747181 logs.go:123] Gathering logs for storage-provisioner [87a0e583f265] ...
	I0223 01:31:28.934903  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87a0e583f265"
	I0223 01:31:28.955863  747181 logs.go:123] Gathering logs for Docker ...
	I0223 01:31:28.955890  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:31:29.015157  747181 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:31:29.015197  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:31:29.105733  747181 logs.go:123] Gathering logs for coredns [aefa45a56f54] ...
	I0223 01:31:29.105760  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aefa45a56f54"
	I0223 01:31:29.125505  747181 logs.go:123] Gathering logs for kube-proxy [6eaafcfb77d4] ...
	I0223 01:31:29.125532  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6eaafcfb77d4"
	I0223 01:31:29.146014  747181 logs.go:123] Gathering logs for kubernetes-dashboard [d18a90a3d2d1] ...
	I0223 01:31:29.146043  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18a90a3d2d1"
	I0223 01:31:31.667826  747181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 01:31:31.680599  747181 api_server.go:72] duration metric: took 4m6.193372853s to wait for apiserver process to appear ...
	I0223 01:31:31.680639  747181 api_server.go:88] waiting for apiserver healthz status ...
	I0223 01:31:31.680711  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:31:31.698617  747181 logs.go:276] 1 containers: [e3f269ae1d93]
	I0223 01:31:31.698755  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:31:31.716225  747181 logs.go:276] 1 containers: [f0e457a2e9eb]
	I0223 01:31:31.716303  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:31:31.734194  747181 logs.go:276] 1 containers: [aefa45a56f54]
	I0223 01:31:31.734276  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:31:31.751527  747181 logs.go:276] 1 containers: [af049e910b16]
	I0223 01:31:31.751610  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:31:31.769553  747181 logs.go:276] 1 containers: [6eaafcfb77d4]
	I0223 01:31:31.769623  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:31:31.787456  747181 logs.go:276] 1 containers: [c980112f54ec]
	I0223 01:31:31.787559  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:31:31.805210  747181 logs.go:276] 0 containers: []
	W0223 01:31:31.805236  747181 logs.go:278] No container was found matching "kindnet"
	I0223 01:31:31.805285  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:31:31.823185  747181 logs.go:276] 1 containers: [d18a90a3d2d1]
	I0223 01:31:31.823269  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:31:31.841289  747181 logs.go:276] 1 containers: [87a0e583f265]
	I0223 01:31:31.841331  747181 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:31:31.841349  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:31:31.933112  747181 logs.go:123] Gathering logs for kube-apiserver [e3f269ae1d93] ...
	I0223 01:31:31.933146  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3f269ae1d93"
	I0223 01:31:31.964592  747181 logs.go:123] Gathering logs for kube-proxy [6eaafcfb77d4] ...
	I0223 01:31:31.964630  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6eaafcfb77d4"
	I0223 01:31:31.986244  747181 logs.go:123] Gathering logs for kube-controller-manager [c980112f54ec] ...
	I0223 01:31:31.986279  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c980112f54ec"
	I0223 01:31:32.026243  747181 logs.go:123] Gathering logs for kubernetes-dashboard [d18a90a3d2d1] ...
	I0223 01:31:32.026283  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18a90a3d2d1"
	I0223 01:31:32.047323  747181 logs.go:123] Gathering logs for Docker ...
	I0223 01:31:32.047357  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:31:32.102300  747181 logs.go:123] Gathering logs for container status ...
	I0223 01:31:32.102343  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:31:32.157201  747181 logs.go:123] Gathering logs for kubelet ...
	I0223 01:31:32.157237  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:31:32.248025  747181 logs.go:123] Gathering logs for etcd [f0e457a2e9eb] ...
	I0223 01:31:32.248082  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e457a2e9eb"
	I0223 01:31:32.274002  747181 logs.go:123] Gathering logs for coredns [aefa45a56f54] ...
	I0223 01:31:32.274033  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aefa45a56f54"
	I0223 01:31:32.294363  747181 logs.go:123] Gathering logs for kube-scheduler [af049e910b16] ...
	I0223 01:31:32.294396  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af049e910b16"
	I0223 01:31:32.319982  747181 logs.go:123] Gathering logs for storage-provisioner [87a0e583f265] ...
	I0223 01:31:32.320015  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87a0e583f265"
	I0223 01:31:32.340494  747181 logs.go:123] Gathering logs for dmesg ...
	I0223 01:31:32.340523  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:31:34.869040  747181 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0223 01:31:34.873062  747181 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0223 01:31:34.874138  747181 api_server.go:141] control plane version: v1.28.4
	I0223 01:31:34.874163  747181 api_server.go:131] duration metric: took 3.193515753s to wait for apiserver health ...
	I0223 01:31:34.874174  747181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 01:31:34.874242  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:31:34.892077  747181 logs.go:276] 1 containers: [e3f269ae1d93]
	I0223 01:31:34.892136  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:31:34.911804  747181 logs.go:276] 1 containers: [f0e457a2e9eb]
	I0223 01:31:34.911916  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:31:34.929552  747181 logs.go:276] 1 containers: [aefa45a56f54]
	I0223 01:31:34.929639  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:31:34.948285  747181 logs.go:276] 1 containers: [af049e910b16]
	I0223 01:31:34.948397  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:31:34.965670  747181 logs.go:276] 1 containers: [6eaafcfb77d4]
	I0223 01:31:34.965764  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:31:34.983712  747181 logs.go:276] 1 containers: [c980112f54ec]
	I0223 01:31:34.983786  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:31:35.000413  747181 logs.go:276] 0 containers: []
	W0223 01:31:35.000441  747181 logs.go:278] No container was found matching "kindnet"
	I0223 01:31:35.000497  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 01:31:35.018154  747181 logs.go:276] 1 containers: [87a0e583f265]
	I0223 01:31:35.018220  747181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:31:35.035590  747181 logs.go:276] 1 containers: [d18a90a3d2d1]
	I0223 01:31:35.035631  747181 logs.go:123] Gathering logs for dmesg ...
	I0223 01:31:35.035647  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:31:35.060772  747181 logs.go:123] Gathering logs for coredns [aefa45a56f54] ...
	I0223 01:31:35.060804  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aefa45a56f54"
	I0223 01:31:35.080420  747181 logs.go:123] Gathering logs for kube-scheduler [af049e910b16] ...
	I0223 01:31:35.080455  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af049e910b16"
	I0223 01:31:35.106810  747181 logs.go:123] Gathering logs for kube-proxy [6eaafcfb77d4] ...
	I0223 01:31:35.106841  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6eaafcfb77d4"
	I0223 01:31:35.126949  747181 logs.go:123] Gathering logs for kube-controller-manager [c980112f54ec] ...
	I0223 01:31:35.126977  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c980112f54ec"
	I0223 01:31:35.168862  747181 logs.go:123] Gathering logs for Docker ...
	I0223 01:31:35.168899  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:31:35.224570  747181 logs.go:123] Gathering logs for kubelet ...
	I0223 01:31:35.224618  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 01:31:35.317591  747181 logs.go:123] Gathering logs for kube-apiserver [e3f269ae1d93] ...
	I0223 01:31:35.317642  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3f269ae1d93"
	I0223 01:31:35.348873  747181 logs.go:123] Gathering logs for etcd [f0e457a2e9eb] ...
	I0223 01:31:35.348921  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e457a2e9eb"
	I0223 01:31:35.373283  747181 logs.go:123] Gathering logs for storage-provisioner [87a0e583f265] ...
	I0223 01:31:35.373312  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87a0e583f265"
	I0223 01:31:35.392844  747181 logs.go:123] Gathering logs for kubernetes-dashboard [d18a90a3d2d1] ...
	I0223 01:31:35.392882  747181 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18a90a3d2d1"
	I0223 01:31:35.414101  747181 logs.go:123] Gathering logs for container status ...
	I0223 01:31:35.414134  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:31:35.467188  747181 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:31:35.467221  747181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 01:31:38.067133  747181 system_pods.go:59] 8 kube-system pods found
	I0223 01:31:38.067159  747181 system_pods.go:61] "coredns-5dd5756b68-58f8r" [4654ded8-e843-40c2-a043-51af70a0c073] Running
	I0223 01:31:38.067166  747181 system_pods.go:61] "etcd-default-k8s-diff-port-643873" [03e8b1b0-a66a-4001-9ba8-50a81823592e] Running
	I0223 01:31:38.067169  747181 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643873" [c7c0bdbb-d372-4753-92cc-f24fe3f7dcb7] Running
	I0223 01:31:38.067173  747181 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643873" [50e983b6-a2cd-4fb4-a23a-2ebb91a37b73] Running
	I0223 01:31:38.067176  747181 system_pods.go:61] "kube-proxy-2rpb8" [dcc39424-df06-4bf0-b617-7f1e34633991] Running
	I0223 01:31:38.067180  747181 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643873" [5b74d719-d554-4cba-bf75-72c5fd1b6b9f] Running
	I0223 01:31:38.067186  747181 system_pods.go:61] "metrics-server-57f55c9bc5-54cdb" [8e42f000-1c93-462c-966c-ce0f162cac9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:31:38.067191  747181 system_pods.go:61] "storage-provisioner" [6d6131ed-db27-4bdd-8645-38ef42ddb1a8] Running
	I0223 01:31:38.067199  747181 system_pods.go:74] duration metric: took 3.193019209s to wait for pod list to return data ...
	I0223 01:31:38.067209  747181 default_sa.go:34] waiting for default service account to be created ...
	I0223 01:31:38.069384  747181 default_sa.go:45] found service account: "default"
	I0223 01:31:38.069405  747181 default_sa.go:55] duration metric: took 2.18944ms for default service account to be created ...
	I0223 01:31:38.069413  747181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 01:31:38.073877  747181 system_pods.go:86] 8 kube-system pods found
	I0223 01:31:38.073898  747181 system_pods.go:89] "coredns-5dd5756b68-58f8r" [4654ded8-e843-40c2-a043-51af70a0c073] Running
	I0223 01:31:38.073904  747181 system_pods.go:89] "etcd-default-k8s-diff-port-643873" [03e8b1b0-a66a-4001-9ba8-50a81823592e] Running
	I0223 01:31:38.073908  747181 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643873" [c7c0bdbb-d372-4753-92cc-f24fe3f7dcb7] Running
	I0223 01:31:38.073915  747181 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643873" [50e983b6-a2cd-4fb4-a23a-2ebb91a37b73] Running
	I0223 01:31:38.073919  747181 system_pods.go:89] "kube-proxy-2rpb8" [dcc39424-df06-4bf0-b617-7f1e34633991] Running
	I0223 01:31:38.073923  747181 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643873" [5b74d719-d554-4cba-bf75-72c5fd1b6b9f] Running
	I0223 01:31:38.073932  747181 system_pods.go:89] "metrics-server-57f55c9bc5-54cdb" [8e42f000-1c93-462c-966c-ce0f162cac9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 01:31:38.073943  747181 system_pods.go:89] "storage-provisioner" [6d6131ed-db27-4bdd-8645-38ef42ddb1a8] Running
	I0223 01:31:38.073956  747181 system_pods.go:126] duration metric: took 4.534328ms to wait for k8s-apps to be running ...
	I0223 01:31:38.073969  747181 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 01:31:38.074020  747181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:31:38.085228  747181 system_svc.go:56] duration metric: took 11.252838ms WaitForService to wait for kubelet.
	I0223 01:31:38.085252  747181 kubeadm.go:581] duration metric: took 4m12.59802964s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 01:31:38.085280  747181 node_conditions.go:102] verifying NodePressure condition ...
	I0223 01:31:38.087554  747181 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 01:31:38.087572  747181 node_conditions.go:123] node cpu capacity is 8
	I0223 01:31:38.087583  747181 node_conditions.go:105] duration metric: took 2.293685ms to run NodePressure ...
	I0223 01:31:38.087594  747181 start.go:228] waiting for startup goroutines ...
	I0223 01:31:38.087605  747181 start.go:233] waiting for cluster config update ...
	I0223 01:31:38.087620  747181 start.go:242] writing updated cluster config ...
	I0223 01:31:38.087918  747181 ssh_runner.go:195] Run: rm -f paused
	I0223 01:31:38.136302  747181 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0223 01:31:38.139226  747181 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643873" cluster and "default" namespace by default
	I0223 01:33:19.803128  764048 kubeadm.go:322] 
	I0223 01:33:19.803259  764048 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:33:19.803344  764048 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:33:19.803356  764048 kubeadm.go:322] 
	I0223 01:33:19.803405  764048 kubeadm.go:322] This error is likely caused by:
	I0223 01:33:19.803459  764048 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:33:19.803603  764048 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:33:19.803628  764048 kubeadm.go:322] 
	I0223 01:33:19.803738  764048 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:33:19.803768  764048 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:33:19.803850  764048 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:33:19.803871  764048 kubeadm.go:322] 
	I0223 01:33:19.803995  764048 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:33:19.804094  764048 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:33:19.804166  764048 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:33:19.804208  764048 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:33:19.804275  764048 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:33:19.804316  764048 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:33:19.807097  764048 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:33:19.807290  764048 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:33:19.807529  764048 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:33:19.807675  764048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:33:19.807772  764048 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:33:19.807870  764048 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 01:33:19.808072  764048 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 01:33:19.808143  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 01:33:20.547610  764048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 01:33:20.558373  764048 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 01:33:20.558424  764048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 01:33:20.566388  764048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 01:33:20.566427  764048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 01:33:20.729151  764048 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 01:33:20.781037  764048 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0223 01:33:20.781265  764048 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0223 01:33:20.850891  764048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 01:37:22.170348  764048 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 01:37:22.170473  764048 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 01:37:22.173668  764048 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 01:37:22.173765  764048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 01:37:22.173849  764048 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 01:37:22.173919  764048 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0223 01:37:22.173985  764048 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0223 01:37:22.174061  764048 kubeadm.go:322] OS: Linux
	I0223 01:37:22.174159  764048 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 01:37:22.174260  764048 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 01:37:22.174347  764048 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 01:37:22.174416  764048 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 01:37:22.174494  764048 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 01:37:22.174580  764048 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 01:37:22.174682  764048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 01:37:22.174824  764048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 01:37:22.174918  764048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 01:37:22.175001  764048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 01:37:22.175091  764048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 01:37:22.175146  764048 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 01:37:22.175219  764048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 01:37:22.178003  764048 out.go:204]   - Generating certificates and keys ...
	I0223 01:37:22.178119  764048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 01:37:22.178193  764048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 01:37:22.178302  764048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 01:37:22.178387  764048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 01:37:22.178478  764048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 01:37:22.178552  764048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 01:37:22.178641  764048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 01:37:22.178748  764048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 01:37:22.178857  764048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 01:37:22.178961  764048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 01:37:22.179025  764048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 01:37:22.179093  764048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 01:37:22.179146  764048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 01:37:22.179223  764048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 01:37:22.179324  764048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 01:37:22.179381  764048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 01:37:22.179437  764048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 01:37:22.181274  764048 out.go:204]   - Booting up control plane ...
	I0223 01:37:22.181375  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 01:37:22.181453  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 01:37:22.181527  764048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 01:37:22.181637  764048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 01:37:22.181807  764048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 01:37:22.181876  764048 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 01:37:22.181886  764048 kubeadm.go:322] 
	I0223 01:37:22.181942  764048 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 01:37:22.182003  764048 kubeadm.go:322] 	timed out waiting for the condition
	I0223 01:37:22.182013  764048 kubeadm.go:322] 
	I0223 01:37:22.182075  764048 kubeadm.go:322] This error is likely caused by:
	I0223 01:37:22.182121  764048 kubeadm.go:322] 	- The kubelet is not running
	I0223 01:37:22.182283  764048 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 01:37:22.182302  764048 kubeadm.go:322] 
	I0223 01:37:22.182461  764048 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 01:37:22.182511  764048 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 01:37:22.182563  764048 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 01:37:22.182575  764048 kubeadm.go:322] 
	I0223 01:37:22.182695  764048 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 01:37:22.182775  764048 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 01:37:22.182859  764048 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 01:37:22.182908  764048 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 01:37:22.183006  764048 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 01:37:22.183099  764048 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 01:37:22.183153  764048 kubeadm.go:406] StartCluster complete in 12m22.714008739s
	I0223 01:37:22.183276  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 01:37:22.201132  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.201156  764048 logs.go:278] No container was found matching "kube-apiserver"
	I0223 01:37:22.201204  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 01:37:22.217542  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.217566  764048 logs.go:278] No container was found matching "etcd"
	I0223 01:37:22.217616  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 01:37:22.234150  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.234171  764048 logs.go:278] No container was found matching "coredns"
	I0223 01:37:22.234219  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 01:37:22.250946  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.250970  764048 logs.go:278] No container was found matching "kube-scheduler"
	I0223 01:37:22.251013  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 01:37:22.268791  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.268815  764048 logs.go:278] No container was found matching "kube-proxy"
	I0223 01:37:22.268861  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 01:37:22.285465  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.285490  764048 logs.go:278] No container was found matching "kube-controller-manager"
	I0223 01:37:22.285540  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 01:37:22.300896  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.300922  764048 logs.go:278] No container was found matching "kindnet"
	I0223 01:37:22.300966  764048 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 01:37:22.318198  764048 logs.go:276] 0 containers: []
	W0223 01:37:22.318231  764048 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0223 01:37:22.318247  764048 logs.go:123] Gathering logs for dmesg ...
	I0223 01:37:22.318263  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 01:37:22.344168  764048 logs.go:123] Gathering logs for describe nodes ...
	I0223 01:37:22.344203  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 01:37:22.403384  764048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 01:37:22.403409  764048 logs.go:123] Gathering logs for Docker ...
	I0223 01:37:22.403422  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0223 01:37:22.420357  764048 logs.go:123] Gathering logs for container status ...
	I0223 01:37:22.420386  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 01:37:22.457253  764048 logs.go:123] Gathering logs for kubelet ...
	I0223 01:37:22.457281  764048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 01:37:22.486720  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:37:22.488920  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:37:22.490985  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:37:22.500879  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:12 old-k8s-version-799707 kubelet[11323]: E0223 01:37:12.661582   11323 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-799707_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0223 01:37:22.507247  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:16 old-k8s-version-799707 kubelet[11323]: E0223 01:37:16.660990   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0223 01:37:22.509845  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:17 old-k8s-version-799707 kubelet[11323]: E0223 01:37:17.661645   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0223 01:37:22.509984  764048 logs.go:138] Found kubelet problem: Feb 23 01:37:17 old-k8s-version-799707 kubelet[11323]: E0223 01:37:17.662744   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0223 01:37:22.517459  764048 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 01:37:22.517494  764048 out.go:239] * 
	W0223 01:37:22.517554  764048 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:37:22.517575  764048 out.go:239] * 
	W0223 01:37:22.518396  764048 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 01:37:22.521264  764048 out.go:177] X Problems detected in kubelet:
	I0223 01:37:22.522757  764048 out.go:177]   Feb 23 01:37:04 old-k8s-version-799707 kubelet[11323]: E0223 01:37:04.661156   11323 pod_workers.go:191] Error syncing pod ada9a48d69d0428e9c15f5ad9c81ef07 ("etcd-old-k8s-version-799707_kube-system(ada9a48d69d0428e9c15f5ad9c81ef07)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0223 01:37:22.525145  764048 out.go:177]   Feb 23 01:37:05 old-k8s-version-799707 kubelet[11323]: E0223 01:37:05.661922   11323 pod_workers.go:191] Error syncing pod d7c490b6a42435ed7106e1a0ff029359 ("kube-apiserver-old-k8s-version-799707_kube-system(d7c490b6a42435ed7106e1a0ff029359)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0223 01:37:22.526737  764048 out.go:177]   Feb 23 01:37:06 old-k8s-version-799707 kubelet[11323]: E0223 01:37:06.662040   11323 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-799707_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0223 01:37:22.529582  764048 out.go:177] 
	W0223 01:37:22.531019  764048 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1051-gcp
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 01:37:22.531067  764048 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 01:37:22.531087  764048 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 01:37:22.532677  764048 out.go:177] 
	
	
	==> Docker <==
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: Stopping Docker Application Container Engine...
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[845]: time="2024-02-23T01:24:56.103151130Z" level=info msg="Processing signal 'terminated'"
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[845]: time="2024-02-23T01:24:56.104505379Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[845]: time="2024-02-23T01:24:56.105446277Z" level=info msg="Daemon shutdown complete"
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: docker.service: Deactivated successfully.
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: Stopped Docker Application Container Engine.
	Feb 23 01:24:56 old-k8s-version-799707 systemd[1]: Starting Docker Application Container Engine...
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:56.156401519Z" level=info msg="Starting up"
	Feb 23 01:24:56 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:56.174000685Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.449651506Z" level=info msg="Loading containers: start."
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.550676031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.586610824Z" level=info msg="Loading containers: done."
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.596597841Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.596662086Z" level=info msg="Daemon has completed initialization"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.617669481Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 01:24:58 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:24:58.617720354Z" level=info msg="API listen on [::]:2376"
	Feb 23 01:24:58 old-k8s-version-799707 systemd[1]: Started Docker Application Container Engine.
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.150527368Z" level=info msg="ignoring event" container=47668c78cdcb1fce2bb766c0cc09b16a2b0c61141d55b119ba43d7783590e950 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.212846569Z" level=info msg="ignoring event" container=0750d0692fa246cbba2bfa199447688b11d7ef4e766d4cfee3f719b8fabb4d10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.274216714Z" level=info msg="ignoring event" container=af61d65ff239c9d8d9c5f51a91457866fe6c7ec9cd20158c6209df57234f97eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:29:18 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:29:18.336509826Z" level=info msg="ignoring event" container=d60ff522117b24ef563225226ccde3f77f3cb9c3357213d2f1251c0458cac926 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.323035930Z" level=info msg="ignoring event" container=200cf4da53a64c3709b8e625771b2d40b06e1c3c2dfb1919cf2308015d9d6023 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.389474604Z" level=info msg="ignoring event" container=8736a711f3850a76b0836c4dd74120a343f30737a20f4d1d7f646e921b6fcde9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.451511719Z" level=info msg="ignoring event" container=5a61bb3f909310bec9e2b894421396cc9569df4766d9beab54d8518ece47561b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 01:33:20 old-k8s-version-799707 dockerd[1071]: time="2024-02-23T01:33:20.513908192Z" level=info msg="ignoring event" container=11a291a89617ea6f0f076c0e7f0b8512ad4d8b29db67ced225b3fbc08a154a1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 40 dd 8b cc 1d 08 06
	[Feb23 01:21] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 4a 0e 7e 15 d5 08 06
	[  +0.181916] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 bb cb 5d 9c af 08 06
	[  +6.500772] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 7d 73 be 05 49 08 06
	[ +15.142601] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d6 67 fc 1f c4 25 08 06
	[Feb23 01:22] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 26 b6 c3 e3 30 08 06
	[  +8.036365] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 e6 83 29 6d 96 08 06
	[  +0.087440] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 d1 55 83 c1 4e 08 06
	[  +1.229927] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 62 6e c8 47 3f 08 06
	[  +8.749689] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 4f 42 15 1f bb 08 06
	[Feb23 01:23] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 2f 4d 78 36 ec 08 06
	[Feb23 01:27] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da ab 2f 5a 1b 4a 08 06
	[  +9.876056] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 8a f3 8a e1 ab 08 06
	
	
	==> kernel <==
	 01:44:57 up  2:27,  0 users,  load average: 0.05, 0.10, 0.72
	Linux old-k8s-version-799707 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.272708   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.372884   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.401455   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.473072   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.573277   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.601717   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.673521   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.773668   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.801269   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.873888   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:56 old-k8s-version-799707 kubelet[11323]: E0223 01:44:56.974112   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.002336   11323 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.074278   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.174459   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.202125   11323 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.274629   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.374848   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.402292   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.475035   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.575200   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.602488   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.675383   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.775527   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.801847   11323 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-799707&limit=500&resourceVersion=0: dial tcp 192.168.94.2:8443: connect: connection refused
	Feb 23 01:44:57 old-k8s-version-799707 kubelet[11323]: E0223 01:44:57.875710   11323 kubelet.go:2267] node "old-k8s-version-799707" not found
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799707 -n old-k8s-version-799707
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 2 (291.441212ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-799707" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (454.26s)


Test pass (299/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 4.17
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.2
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 4.83
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.87
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.2
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 1.17
30 TestBinaryMirror 0.75
31 TestOffline 55.19
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 135.7
38 TestAddons/parallel/Registry 15.52
39 TestAddons/parallel/Ingress 21.24
40 TestAddons/parallel/InspektorGadget 11.61
41 TestAddons/parallel/MetricsServer 5.63
42 TestAddons/parallel/HelmTiller 11.06
44 TestAddons/parallel/CSI 46.5
45 TestAddons/parallel/Headlamp 12.42
46 TestAddons/parallel/CloudSpanner 5.67
47 TestAddons/parallel/LocalPath 53.26
48 TestAddons/parallel/NvidiaDevicePlugin 6.44
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
53 TestAddons/StoppedEnableDisable 11.09
54 TestCertOptions 29.5
55 TestCertExpiration 237.08
56 TestDockerFlags 31.65
57 TestForceSystemdFlag 31.45
58 TestForceSystemdEnv 37.41
60 TestKVMDriverInstallOrUpdate 3.65
64 TestErrorSpam/setup 21.51
65 TestErrorSpam/start 0.61
66 TestErrorSpam/status 0.88
67 TestErrorSpam/pause 1.19
68 TestErrorSpam/unpause 1.21
69 TestErrorSpam/stop 10.86
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 39.41
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 35.2
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.06
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.33
81 TestFunctional/serial/CacheCmd/cache/add_local 1
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.31
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 34.29
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 0.97
92 TestFunctional/serial/LogsFileCmd 1.01
93 TestFunctional/serial/InvalidService 4.18
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 10.83
97 TestFunctional/parallel/DryRun 0.53
98 TestFunctional/parallel/InternationalLanguage 0.21
99 TestFunctional/parallel/StatusCmd 1.06
103 TestFunctional/parallel/ServiceCmdConnect 8.63
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 26.83
107 TestFunctional/parallel/SSHCmd 0.54
108 TestFunctional/parallel/CpCmd 1.7
109 TestFunctional/parallel/MySQL 21.68
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.62
115 TestFunctional/parallel/NodeLabels 0.11
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
119 TestFunctional/parallel/License 0.24
120 TestFunctional/parallel/Version/short 0.09
121 TestFunctional/parallel/Version/components 0.74
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.92
127 TestFunctional/parallel/ImageCommands/Setup 0.91
128 TestFunctional/parallel/ServiceCmd/DeployApp 8.17
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.73
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.68
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.83
132 TestFunctional/parallel/ServiceCmd/List 0.46
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
135 TestFunctional/parallel/ProfileCmd/profile_list 0.43
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
138 TestFunctional/parallel/ServiceCmd/Format 0.45
139 TestFunctional/parallel/ServiceCmd/URL 0.44
140 TestFunctional/parallel/MountCmd/any-port 13.06
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.75
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.87
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.38
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
148 TestFunctional/parallel/DockerEnv/bash 0.95
149 TestFunctional/parallel/MountCmd/specific-port 2.29
151 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
152 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
154 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.26
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.16
156 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
157 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
161 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
162 TestFunctional/delete_addon-resizer_images 0.07
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestImageBuild/serial/Setup 21.97
169 TestImageBuild/serial/NormalBuild 1.1
170 TestImageBuild/serial/BuildWithBuildArg 0.75
171 TestImageBuild/serial/BuildWithDockerIgnore 0.51
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.51
180 TestJSONOutput/start/Command 39.62
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.5
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.42
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.68
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.22
205 TestKicCustomNetwork/create_custom_network 28.19
206 TestKicCustomNetwork/use_default_bridge_network 24.27
207 TestKicExistingNetwork 24.14
208 TestKicCustomSubnet 23.22
209 TestKicStaticIP 26.71
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 52.62
214 TestMountStart/serial/StartWithMountFirst 6.16
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 8.92
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.45
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.18
221 TestMountStart/serial/RestartStopped 7.42
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 70.26
226 TestMultiNode/serial/DeployApp2Nodes 34.83
227 TestMultiNode/serial/PingHostFrom2Pods 0.78
228 TestMultiNode/serial/AddNode 18.18
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.28
231 TestMultiNode/serial/CopyFile 9.29
232 TestMultiNode/serial/StopNode 2.12
233 TestMultiNode/serial/StartAfterStop 11.51
234 TestMultiNode/serial/RestartKeepsNodes 91.19
235 TestMultiNode/serial/DeleteNode 4.64
236 TestMultiNode/serial/StopMultiNode 21.38
237 TestMultiNode/serial/RestartMultiNode 59.13
238 TestMultiNode/serial/ValidateNameConflict 26.93
243 TestPreload 147.53
245 TestScheduledStopUnix 98.23
246 TestSkaffold 115.19
248 TestInsufficientStorage 13.15
249 TestRunningBinaryUpgrade 64.51
252 TestMissingContainerUpgrade 137.14
254 TestStoppedBinaryUpgrade/Setup 0.55
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 39.1
257 TestStoppedBinaryUpgrade/Upgrade 99.15
258 TestNoKubernetes/serial/StartWithStopK8s 17.75
259 TestNoKubernetes/serial/Start 8.76
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
261 TestNoKubernetes/serial/ProfileList 7.65
262 TestNoKubernetes/serial/Stop 1.22
263 TestNoKubernetes/serial/StartNoArgs 7.03
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
285 TestPause/serial/Start 75.58
286 TestNetworkPlugins/group/auto/Start 65.94
287 TestPause/serial/SecondStartNoReconfiguration 38.01
288 TestPause/serial/Pause 0.48
289 TestPause/serial/VerifyStatus 0.3
290 TestPause/serial/Unpause 0.43
291 TestPause/serial/PauseAgain 0.67
292 TestPause/serial/DeletePaused 2.06
293 TestPause/serial/VerifyDeletedResources 0.62
294 TestNetworkPlugins/group/kindnet/Start 55.47
295 TestNetworkPlugins/group/auto/KubeletFlags 0.38
296 TestNetworkPlugins/group/auto/NetCatPod 9.48
297 TestNetworkPlugins/group/auto/DNS 0.13
298 TestNetworkPlugins/group/auto/Localhost 0.12
299 TestNetworkPlugins/group/auto/HairPin 0.11
300 TestNetworkPlugins/group/calico/Start 69.89
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/custom-flannel/Start 53.41
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.54
305 TestNetworkPlugins/group/kindnet/DNS 0.16
306 TestNetworkPlugins/group/kindnet/Localhost 0.15
307 TestNetworkPlugins/group/kindnet/HairPin 0.14
308 TestNetworkPlugins/group/false/Start 40.04
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.32
311 TestNetworkPlugins/group/calico/NetCatPod 10.2
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
314 TestNetworkPlugins/group/calico/DNS 0.13
315 TestNetworkPlugins/group/calico/Localhost 0.11
316 TestNetworkPlugins/group/calico/HairPin 0.11
317 TestNetworkPlugins/group/custom-flannel/DNS 0.13
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
320 TestNetworkPlugins/group/false/KubeletFlags 0.29
321 TestNetworkPlugins/group/false/NetCatPod 9.2
322 TestNetworkPlugins/group/enable-default-cni/Start 79.27
323 TestNetworkPlugins/group/false/DNS 0.16
324 TestNetworkPlugins/group/false/Localhost 0.12
325 TestNetworkPlugins/group/false/HairPin 0.13
326 TestNetworkPlugins/group/flannel/Start 58.21
327 TestNetworkPlugins/group/bridge/Start 43.4
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
330 TestNetworkPlugins/group/bridge/NetCatPod 10.19
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
332 TestNetworkPlugins/group/flannel/NetCatPod 8.2
333 TestNetworkPlugins/group/flannel/DNS 0.13
334 TestNetworkPlugins/group/flannel/Localhost 0.11
335 TestNetworkPlugins/group/flannel/HairPin 0.11
336 TestNetworkPlugins/group/bridge/DNS 0.13
337 TestNetworkPlugins/group/bridge/Localhost 0.11
338 TestNetworkPlugins/group/bridge/HairPin 0.11
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
344 TestNetworkPlugins/group/kubenet/Start 49.32
348 TestStartStop/group/no-preload/serial/FirstStart 51.55
349 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
350 TestNetworkPlugins/group/kubenet/NetCatPod 9.18
351 TestNetworkPlugins/group/kubenet/DNS 0.13
352 TestNetworkPlugins/group/kubenet/Localhost 0.11
353 TestNetworkPlugins/group/kubenet/HairPin 0.11
354 TestStartStop/group/no-preload/serial/DeployApp 8.24
355 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
356 TestStartStop/group/no-preload/serial/Stop 10.83
358 TestStartStop/group/embed-certs/serial/FirstStart 40.62
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
360 TestStartStop/group/no-preload/serial/SecondStart 315.68
361 TestStartStop/group/embed-certs/serial/DeployApp 8.22
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
363 TestStartStop/group/embed-certs/serial/Stop 10.66
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/embed-certs/serial/SecondStart 563.44
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
371 TestStartStop/group/no-preload/serial/Pause 2.65
373 TestStartStop/group/newest-cni/serial/FirstStart 36.69
374 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
376 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
378 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 559.76
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
381 TestStartStop/group/newest-cni/serial/Stop 10.77
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
383 TestStartStop/group/newest-cni/serial/SecondStart 26.17
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/newest-cni/serial/Pause 2.44
390 TestStartStop/group/old-k8s-version/serial/Stop 1.2
391 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
395 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
396 TestStartStop/group/embed-certs/serial/Pause 2.42
397 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
399 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
400 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.4
TestDownloadOnly/v1.16.0/json-events (4.17s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-438717 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-438717 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.171698614s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.17s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-438717
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-438717: exit status 85 (71.092762ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-438717 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC |          |
	|         | -p download-only-438717        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/23 00:31:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 00:31:41.920613  324387 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:31:41.920784  324387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:31:41.920798  324387 out.go:304] Setting ErrFile to fd 2...
	I0223 00:31:41.920805  324387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:31:41.921020  324387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	W0223 00:31:41.921147  324387 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18233-317564/.minikube/config/config.json: open /home/jenkins/minikube-integration/18233-317564/.minikube/config/config.json: no such file or directory
	I0223 00:31:41.921700  324387 out.go:298] Setting JSON to true
	I0223 00:31:41.922705  324387 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4451,"bootTime":1708643851,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 00:31:41.922777  324387 start.go:139] virtualization: kvm guest
	I0223 00:31:41.925281  324387 out.go:97] [download-only-438717] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 00:31:41.925417  324387 notify.go:220] Checking for updates...
	W0223 00:31:41.925516  324387 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 00:31:41.927107  324387 out.go:169] MINIKUBE_LOCATION=18233
	I0223 00:31:41.928564  324387 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 00:31:41.929844  324387 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 00:31:41.931137  324387 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 00:31:41.932285  324387 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 00:31:41.934672  324387 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 00:31:41.934921  324387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 00:31:41.956296  324387 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 00:31:41.956429  324387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:31:42.005833  324387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-23 00:31:41.996380497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:31:42.005950  324387 docker.go:295] overlay module found
	I0223 00:31:42.007646  324387 out.go:97] Using the docker driver based on user configuration
	I0223 00:31:42.007674  324387 start.go:299] selected driver: docker
	I0223 00:31:42.007683  324387 start.go:903] validating driver "docker" against <nil>
	I0223 00:31:42.007773  324387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:31:42.056521  324387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-23 00:31:42.048208023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:31:42.056717  324387 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0223 00:31:42.057199  324387 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0223 00:31:42.057391  324387 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 00:31:42.059062  324387 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-438717"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.16.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.20s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-438717
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.28.4/json-events (4.83s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-136207 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-136207 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.833261991s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.83s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-136207
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-136207: exit status 85 (75.162585ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-438717 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC |                     |
	|         | -p download-only-438717        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC | 23 Feb 24 00:31 UTC |
	| delete  | -p download-only-438717        | download-only-438717 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC | 23 Feb 24 00:31 UTC |
	| start   | -o=json --download-only        | download-only-136207 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC |                     |
	|         | -p download-only-136207        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/23 00:31:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 00:31:46.500257  324681 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:31:46.500394  324681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:31:46.500404  324681 out.go:304] Setting ErrFile to fd 2...
	I0223 00:31:46.500408  324681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:31:46.500610  324681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:31:46.501198  324681 out.go:298] Setting JSON to true
	I0223 00:31:46.502723  324681 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4456,"bootTime":1708643851,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 00:31:46.503153  324681 start.go:139] virtualization: kvm guest
	I0223 00:31:46.505034  324681 out.go:97] [download-only-136207] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 00:31:46.506381  324681 out.go:169] MINIKUBE_LOCATION=18233
	I0223 00:31:46.505179  324681 notify.go:220] Checking for updates...
	I0223 00:31:46.508878  324681 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 00:31:46.510310  324681 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 00:31:46.511492  324681 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 00:31:46.512702  324681 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 00:31:46.515057  324681 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 00:31:46.515270  324681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 00:31:46.537105  324681 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 00:31:46.537219  324681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:31:46.585561  324681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-23 00:31:46.575761108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:31:46.585721  324681 docker.go:295] overlay module found
	I0223 00:31:46.587526  324681 out.go:97] Using the docker driver based on user configuration
	I0223 00:31:46.587555  324681 start.go:299] selected driver: docker
	I0223 00:31:46.587563  324681 start.go:903] validating driver "docker" against <nil>
	I0223 00:31:46.587695  324681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:31:46.638890  324681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-23 00:31:46.630438346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:31:46.639069  324681 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0223 00:31:46.639570  324681 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0223 00:31:46.639749  324681 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 00:31:46.641650  324681 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-136207"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-136207
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (4.87s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-019291 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-019291 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.865812296s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.87s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-019291
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-019291: exit status 85 (75.027196ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-438717 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC |                     |
	|         | -p download-only-438717           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC | 23 Feb 24 00:31 UTC |
	| delete  | -p download-only-438717           | download-only-438717 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC | 23 Feb 24 00:31 UTC |
	| start   | -o=json --download-only           | download-only-136207 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC |                     |
	|         | -p download-only-136207           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC | 23 Feb 24 00:31 UTC |
	| delete  | -p download-only-136207           | download-only-136207 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC | 23 Feb 24 00:31 UTC |
	| start   | -o=json --download-only           | download-only-019291 | jenkins | v1.32.0 | 23 Feb 24 00:31 UTC |                     |
	|         | -p download-only-019291           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/23 00:31:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 00:31:51.755034  324971 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:31:51.755161  324971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:31:51.755169  324971 out.go:304] Setting ErrFile to fd 2...
	I0223 00:31:51.755174  324971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:31:51.755383  324971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:31:51.755959  324971 out.go:298] Setting JSON to true
	I0223 00:31:51.756896  324971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4461,"bootTime":1708643851,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 00:31:51.756974  324971 start.go:139] virtualization: kvm guest
	I0223 00:31:51.758983  324971 out.go:97] [download-only-019291] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 00:31:51.760458  324971 out.go:169] MINIKUBE_LOCATION=18233
	I0223 00:31:51.759139  324971 notify.go:220] Checking for updates...
	I0223 00:31:51.763195  324971 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 00:31:51.764437  324971 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 00:31:51.765682  324971 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 00:31:51.767002  324971 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 00:31:51.769363  324971 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 00:31:51.769686  324971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 00:31:51.791804  324971 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 00:31:51.791921  324971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:31:51.840336  324971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-23 00:31:51.831135349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:31:51.840452  324971 docker.go:295] overlay module found
	I0223 00:31:51.842345  324971 out.go:97] Using the docker driver based on user configuration
	I0223 00:31:51.842370  324971 start.go:299] selected driver: docker
	I0223 00:31:51.842375  324971 start.go:903] validating driver "docker" against <nil>
	I0223 00:31:51.842461  324971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:31:51.888847  324971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-23 00:31:51.880680484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:31:51.889012  324971 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0223 00:31:51.889496  324971 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0223 00:31:51.889637  324971 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 00:31:51.891523  324971 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-019291"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.20s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-019291
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.17s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-649623 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-649623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-649623
--- PASS: TestDownloadOnlyKic (1.17s)

TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-799981 --alsologtostderr --binary-mirror http://127.0.0.1:43367 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-799981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-799981
--- PASS: TestBinaryMirror (0.75s)

TestOffline (55.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-576073 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-576073 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (52.940923971s)
helpers_test.go:175: Cleaning up "offline-docker-576073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-576073
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-576073: (2.25293705s)
--- PASS: TestOffline (55.19s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-342517
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-342517: exit status 85 (61.280058ms)
-- stdout --
	* Profile "addons-342517" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-342517"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-342517
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-342517: exit status 85 (62.834509ms)
-- stdout --
	* Profile "addons-342517" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-342517"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
TestAddons/Setup (135.7s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-342517 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-342517 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m15.694592418s)
--- PASS: TestAddons/Setup (135.70s)
TestAddons/parallel/Registry (15.52s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 17.672365ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-gx5t7" [be39ef4d-37a8-40c5-8341-54ee9b221591] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003902625s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7cz5q" [0a19ca4f-9279-4121-9528-2915667d06d8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005827724s
addons_test.go:340: (dbg) Run:  kubectl --context addons-342517 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-342517 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-342517 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.644196834s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 ip
2024/02/23 00:34:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.52s)
TestAddons/parallel/Ingress (21.24s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-342517 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-342517 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-342517 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6abdb128-01ac-411a-baf6-b50f86f7c91a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6abdb128-01ac-411a-baf6-b50f86f7c91a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003376949s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-342517 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-342517 addons disable ingress-dns --alsologtostderr -v=1: (1.843575462s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-342517 addons disable ingress --alsologtostderr -v=1: (8.079632334s)
--- PASS: TestAddons/parallel/Ingress (21.24s)
TestAddons/parallel/InspektorGadget (11.61s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l7wjh" [2dc14f66-23a3-481a-bd4f-358c1618ccfb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004251261s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-342517
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-342517: (5.606016968s)
--- PASS: TestAddons/parallel/InspektorGadget (11.61s)
TestAddons/parallel/MetricsServer (5.63s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.08241ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-jtb8x" [a69722f5-38d9-428a-9a2d-290d4b407d4b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004483534s
addons_test.go:415: (dbg) Run:  kubectl --context addons-342517 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)
TestAddons/parallel/HelmTiller (11.06s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.296231ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gx48q" [84109e51-56bb-4164-83a4-b0d4f3aeb41e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005315672s
addons_test.go:473: (dbg) Run:  kubectl --context addons-342517 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-342517 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.580535942s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.06s)
TestAddons/parallel/CSI (46.5s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 5.431886ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-342517 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-342517 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [217b6e3e-44bc-45b0-9e95-b32f446bf5ad] Pending
helpers_test.go:344: "task-pv-pod" [217b6e3e-44bc-45b0-9e95-b32f446bf5ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [217b6e3e-44bc-45b0-9e95-b32f446bf5ad] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004183781s
addons_test.go:584: (dbg) Run:  kubectl --context addons-342517 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-342517 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-342517 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-342517 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-342517 delete pod task-pv-pod: (1.31348539s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-342517 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-342517 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-342517 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [de1bd235-14bb-440b-bd1d-add1d572eef0] Pending
helpers_test.go:344: "task-pv-pod-restore" [de1bd235-14bb-440b-bd1d-add1d572eef0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [de1bd235-14bb-440b-bd1d-add1d572eef0] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003663624s
addons_test.go:626: (dbg) Run:  kubectl --context addons-342517 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-342517 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-342517 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-342517 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.539364737s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.50s)
TestAddons/parallel/Headlamp (12.42s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-342517 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-342517 --alsologtostderr -v=1: (1.417321994s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-t4mq2" [edc8100d-5857-40be-803e-e2e8371d973c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-t4mq2" [edc8100d-5857-40be-803e-e2e8371d973c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003573127s
--- PASS: TestAddons/parallel/Headlamp (12.42s)
TestAddons/parallel/CloudSpanner (5.67s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-gbcmh" [d2c9d5de-4df6-4169-bfa8-2495ebea1ffc] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003555978s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-342517
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)
TestAddons/parallel/LocalPath (53.26s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-342517 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-342517 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342517 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b9069363-bc04-449a-a5ca-0b48dcf7eebf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b9069363-bc04-449a-a5ca-0b48dcf7eebf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b9069363-bc04-449a-a5ca-0b48dcf7eebf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005478735s
addons_test.go:891: (dbg) Run:  kubectl --context addons-342517 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 ssh "cat /opt/local-path-provisioner/pvc-bcf24f0c-c4cb-49a0-96e7-6fa8a09bb1ca_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-342517 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-342517 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-342517 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-342517 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.281584586s)
--- PASS: TestAddons/parallel/LocalPath (53.26s)
TestAddons/parallel/NvidiaDevicePlugin (6.44s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8m5xl" [cf158e2d-23fa-4376-a0b3-3791a690cf96] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004831283s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-342517
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)
TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-58nbf" [aec4a579-c205-41d6-b043-195502bcd968] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004158772s
--- PASS: TestAddons/parallel/Yakd (6.01s)
TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-342517 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-342517 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
TestAddons/StoppedEnableDisable (11.09s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-342517
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-342517: (10.818743735s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-342517
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-342517
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-342517
--- PASS: TestAddons/StoppedEnableDisable (11.09s)
TestCertOptions (29.5s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-245773 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0223 01:08:35.080628  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-245773 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.205641128s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-245773 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-245773 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-245773 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-245773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-245773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-245773: (3.670246671s)
--- PASS: TestCertOptions (29.50s)
TestCertExpiration (237.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-197048 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-197048 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.858355996s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-197048 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0223 01:11:28.626777  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-197048 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (26.004781551s)
helpers_test.go:175: Cleaning up "cert-expiration-197048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-197048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-197048: (2.216423768s)
--- PASS: TestCertExpiration (237.08s)
TestDockerFlags (31.65s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-106051 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-106051 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.939542267s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-106051 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-106051 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-106051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-106051
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-106051: (2.089027981s)
--- PASS: TestDockerFlags (31.65s)

TestForceSystemdFlag (31.45s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-739290 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-739290 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.335639057s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-739290 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-739290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-739290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-739290: (5.65514773s)
--- PASS: TestForceSystemdFlag (31.45s)

TestForceSystemdEnv (37.41s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-580709 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-580709 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.734039734s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-580709 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-580709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-580709
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-580709: (2.298450931s)
--- PASS: TestForceSystemdEnv (37.41s)

TestKVMDriverInstallOrUpdate (3.65s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.65s)

TestErrorSpam/setup (21.51s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-499881 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-499881 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-499881 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-499881 --driver=docker  --container-runtime=docker: (21.507787019s)
--- PASS: TestErrorSpam/setup (21.51s)

TestErrorSpam/start (0.61s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.88s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.19s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.21s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 unpause
--- PASS: TestErrorSpam/unpause (1.21s)

TestErrorSpam/stop (10.86s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 stop: (10.655176511s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-499881 --log_dir /tmp/nospam-499881 stop
--- PASS: TestErrorSpam/stop (10.86s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/test/nested/copy/324375/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.41s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511250 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-511250 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (39.411855059s)
--- PASS: TestFunctional/serial/StartWithProxy (39.41s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.2s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511250 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-511250 --alsologtostderr -v=8: (35.195770752s)
functional_test.go:659: soft start took 35.19664577s for "functional-511250" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.20s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-511250 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.33s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-511250 /tmp/TestFunctionalserialCacheCmdcacheadd_local1301341776/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cache add minikube-local-cache-test:functional-511250
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cache delete minikube-local-cache-test:functional-511250
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-511250
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.000094ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 kubectl -- --context functional-511250 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-511250 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (34.29s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511250 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-511250 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.289542475s)
functional_test.go:757: restart took 34.289706848s for "functional-511250" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.29s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-511250 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.97s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 logs
--- PASS: TestFunctional/serial/LogsCmd (0.97s)

TestFunctional/serial/LogsFileCmd (1.01s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 logs --file /tmp/TestFunctionalserialLogsFileCmd1603533415/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 logs --file /tmp/TestFunctionalserialLogsFileCmd1603533415/001/logs.txt: (1.005468767s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.18s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-511250 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-511250
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-511250: exit status 115 (333.107149ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30855 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-511250 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 config get cpus: exit status 14 (65.414312ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 config get cpus: exit status 14 (62.103605ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (10.83s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-511250 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-511250 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 365811: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.83s)

TestFunctional/parallel/DryRun (0.53s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-511250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (229.680747ms)

-- stdout --
	* [functional-511250] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0223 00:38:45.443439  364774 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:38:45.443603  364774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:38:45.443612  364774 out.go:304] Setting ErrFile to fd 2...
	I0223 00:38:45.443620  364774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:38:45.443938  364774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:38:45.444626  364774 out.go:298] Setting JSON to false
	I0223 00:38:45.445869  364774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4874,"bootTime":1708643851,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 00:38:45.445949  364774 start.go:139] virtualization: kvm guest
	I0223 00:38:45.448486  364774 out.go:177] * [functional-511250] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 00:38:45.450108  364774 notify.go:220] Checking for updates...
	I0223 00:38:45.450117  364774 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 00:38:45.451706  364774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 00:38:45.453085  364774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 00:38:45.454360  364774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 00:38:45.455711  364774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 00:38:45.457571  364774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 00:38:45.459209  364774 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 00:38:45.459736  364774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 00:38:45.486258  364774 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 00:38:45.486386  364774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:38:45.574600  364774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-23 00:38:45.564016219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:38:45.575237  364774 docker.go:295] overlay module found
	I0223 00:38:45.583192  364774 out.go:177] * Using the docker driver based on existing profile
	I0223 00:38:45.584490  364774 start.go:299] selected driver: docker
	I0223 00:38:45.584518  364774 start.go:903] validating driver "docker" against &{Name:functional-511250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-511250 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 00:38:45.584650  364774 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 00:38:45.587111  364774 out.go:177] 
	W0223 00:38:45.588641  364774 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 00:38:45.590181  364774 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511250 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.53s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-511250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (206.878434ms)
-- stdout --
	* [functional-511250] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0223 00:38:46.131097  365100 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:38:46.131344  365100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:38:46.131379  365100 out.go:304] Setting ErrFile to fd 2...
	I0223 00:38:46.131406  365100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:38:46.131785  365100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:38:46.132330  365100 out.go:298] Setting JSON to false
	I0223 00:38:46.133599  365100 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4875,"bootTime":1708643851,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 00:38:46.133708  365100 start.go:139] virtualization: kvm guest
	I0223 00:38:46.136871  365100 out.go:177] * [functional-511250] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0223 00:38:46.138840  365100 out.go:177]   - MINIKUBE_LOCATION=18233
	I0223 00:38:46.140372  365100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 00:38:46.138882  365100 notify.go:220] Checking for updates...
	I0223 00:38:46.142070  365100 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	I0223 00:38:46.143600  365100 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	I0223 00:38:46.145025  365100 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 00:38:46.146561  365100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 00:38:46.148581  365100 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 00:38:46.149220  365100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0223 00:38:46.172177  365100 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0223 00:38:46.172274  365100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:38:46.231838  365100 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-23 00:38:46.220980841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:38:46.231961  365100 docker.go:295] overlay module found
	I0223 00:38:46.233709  365100 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0223 00:38:46.235650  365100 start.go:299] selected driver: docker
	I0223 00:38:46.235673  365100 start.go:903] validating driver "docker" against &{Name:functional-511250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-511250 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0223 00:38:46.235803  365100 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 00:38:46.238147  365100 out.go:177] 
	W0223 00:38:46.239603  365100 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 00:38:46.241168  365100 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-511250 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-511250 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-w52cc" [832decf9-d612-4ad2-a137-46712a3070a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-w52cc" [832decf9-d612-4ad2-a137-46712a3070a1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004228706s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30155
functional_test.go:1671: http://192.168.49.2:30155: success! body:

Hostname: hello-node-connect-55497b8b78-w52cc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30155
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (26.83s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0ad4e5bc-ede6-41c7-9bed-5a1cd4071fa9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004108015s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-511250 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-511250 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-511250 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-511250 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bdab63ef-ac56-4c95-8d3c-55c0d117b097] Pending
helpers_test.go:344: "sp-pod" [bdab63ef-ac56-4c95-8d3c-55c0d117b097] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bdab63ef-ac56-4c95-8d3c-55c0d117b097] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.024497705s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-511250 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-511250 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-511250 delete -f testdata/storage-provisioner/pod.yaml: (1.856502953s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-511250 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ed02d52e-85f0-4f11-924a-fe36c8f16b1e] Pending
helpers_test.go:344: "sp-pod" [ed02d52e-85f0-4f11-924a-fe36c8f16b1e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ed02d52e-85f0-4f11-924a-fe36c8f16b1e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00530105s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-511250 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.83s)

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh -n functional-511250 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cp functional-511250:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2169721726/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh -n functional-511250 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh -n functional-511250 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (21.68s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-511250 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5nw65" [9a007d58-49cf-4748-9619-8b3d2e381c56] Pending
helpers_test.go:344: "mysql-859648c796-5nw65" [9a007d58-49cf-4748-9619-8b3d2e381c56] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5nw65" [9a007d58-49cf-4748-9619-8b3d2e381c56] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003414544s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-511250 exec mysql-859648c796-5nw65 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-511250 exec mysql-859648c796-5nw65 -- mysql -ppassword -e "show databases;": exit status 1 (107.060214ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-511250 exec mysql-859648c796-5nw65 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.68s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/324375/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /etc/test/nested/copy/324375/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/324375.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /etc/ssl/certs/324375.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/324375.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /usr/share/ca-certificates/324375.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3243752.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /etc/ssl/certs/3243752.pem"
2024/02/23 00:38:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3243752.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /usr/share/ca-certificates/3243752.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-511250 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 ssh "sudo systemctl is-active crio": exit status 1 (296.553441ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.74s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511250 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-511250
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-511250
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511250 image ls --format short --alsologtostderr:
I0223 00:39:02.794327  370735 out.go:291] Setting OutFile to fd 1 ...
I0223 00:39:02.794469  370735 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:02.794481  370735 out.go:304] Setting ErrFile to fd 2...
I0223 00:39:02.794488  370735 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:02.794722  370735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
I0223 00:39:02.795458  370735 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:02.795607  370735 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:02.796088  370735 cli_runner.go:164] Run: docker container inspect functional-511250 --format={{.State.Status}}
I0223 00:39:02.816229  370735 ssh_runner.go:195] Run: systemctl --version
I0223 00:39:02.816280  370735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511250
I0223 00:39:02.837560  370735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/functional-511250/id_rsa Username:docker}
I0223 00:39:02.931265  370735 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511250 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-511250 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-511250 | f3408ba5ad7c4 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511250 image ls --format table --alsologtostderr:
I0223 00:39:03.313727  370940 out.go:291] Setting OutFile to fd 1 ...
I0223 00:39:03.313831  370940 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:03.313835  370940 out.go:304] Setting ErrFile to fd 2...
I0223 00:39:03.313839  370940 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:03.314026  370940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
I0223 00:39:03.314931  370940 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:03.315089  370940 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:03.315726  370940 cli_runner.go:164] Run: docker container inspect functional-511250 --format={{.State.Status}}
I0223 00:39:03.335962  370940 ssh_runner.go:195] Run: systemctl --version
I0223 00:39:03.336011  370940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511250
I0223 00:39:03.354969  370940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/functional-511250/id_rsa Username:docker}
I0223 00:39:03.454482  370940 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511250 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-511250"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"}
,{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f3408ba5ad7c48e00bed6c0d3884f147d4d7a006e13db83321b7626bace97e52","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-511250"],"size":"30"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511250 image ls --format json --alsologtostderr:
I0223 00:39:03.076172  370833 out.go:291] Setting OutFile to fd 1 ...
I0223 00:39:03.076634  370833 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:03.076654  370833 out.go:304] Setting ErrFile to fd 2...
I0223 00:39:03.076660  370833 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:03.077090  370833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
I0223 00:39:03.078007  370833 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:03.078186  370833 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:03.078747  370833 cli_runner.go:164] Run: docker container inspect functional-511250 --format={{.State.Status}}
I0223 00:39:03.099129  370833 ssh_runner.go:195] Run: systemctl --version
I0223 00:39:03.099186  370833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511250
I0223 00:39:03.121354  370833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/functional-511250/id_rsa Username:docker}
I0223 00:39:03.214829  370833 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511250 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-511250
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: f3408ba5ad7c48e00bed6c0d3884f147d4d7a006e13db83321b7626bace97e52
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-511250
size: "30"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511250 image ls --format yaml --alsologtostderr:
I0223 00:39:02.824672  370745 out.go:291] Setting OutFile to fd 1 ...
I0223 00:39:02.824856  370745 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:02.824864  370745 out.go:304] Setting ErrFile to fd 2...
I0223 00:39:02.824868  370745 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:02.825099  370745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
I0223 00:39:02.825703  370745 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:02.825806  370745 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:02.826309  370745 cli_runner.go:164] Run: docker container inspect functional-511250 --format={{.State.Status}}
I0223 00:39:02.843402  370745 ssh_runner.go:195] Run: systemctl --version
I0223 00:39:02.843471  370745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511250
I0223 00:39:02.865620  370745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/functional-511250/id_rsa Username:docker}
I0223 00:39:02.971173  370745 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 ssh pgrep buildkitd: exit status 1 (296.632754ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image build -t localhost/my-image:functional-511250 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 image build -t localhost/my-image:functional-511250 testdata/build --alsologtostderr: (2.357965179s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511250 image build -t localhost/my-image:functional-511250 testdata/build --alsologtostderr:
I0223 00:39:03.332062  370951 out.go:291] Setting OutFile to fd 1 ...
I0223 00:39:03.332188  370951 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:03.332198  370951 out.go:304] Setting ErrFile to fd 2...
I0223 00:39:03.332209  370951 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:03.335231  370951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
I0223 00:39:03.336228  370951 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:03.336861  370951 config.go:182] Loaded profile config "functional-511250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0223 00:39:03.337433  370951 cli_runner.go:164] Run: docker container inspect functional-511250 --format={{.State.Status}}
I0223 00:39:03.358239  370951 ssh_runner.go:195] Run: systemctl --version
I0223 00:39:03.358307  370951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511250
I0223 00:39:03.375152  370951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/functional-511250/id_rsa Username:docker}
I0223 00:39:03.466963  370951 build_images.go:151] Building image from path: /tmp/build.1239998291.tar
I0223 00:39:03.467029  370951 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0223 00:39:03.480217  370951 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1239998291.tar
I0223 00:39:03.484204  370951 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1239998291.tar: stat -c "%s %y" /var/lib/minikube/build/build.1239998291.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1239998291.tar': No such file or directory
I0223 00:39:03.484232  370951 ssh_runner.go:362] scp /tmp/build.1239998291.tar --> /var/lib/minikube/build/build.1239998291.tar (3072 bytes)
I0223 00:39:03.509614  370951 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1239998291
I0223 00:39:03.518355  370951 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1239998291 -xf /var/lib/minikube/build/build.1239998291.tar
I0223 00:39:03.527891  370951 docker.go:360] Building image: /var/lib/minikube/build/build.1239998291
I0223 00:39:03.527956  370951 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-511250 /var/lib/minikube/build/build.1239998291
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:6efc9ced9e98c825329ed92ecabb51cbd829736bcb7c487c94a712c1e05eb35d done
#8 naming to localhost/my-image:functional-511250 done
#8 DONE 0.0s
I0223 00:39:05.589153  370951 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-511250 /var/lib/minikube/build/build.1239998291: (2.061168392s)
I0223 00:39:05.589224  370951 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1239998291
I0223 00:39:05.599354  370951 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1239998291.tar
I0223 00:39:05.608697  370951 build_images.go:207] Built localhost/my-image:functional-511250 from /tmp/build.1239998291.tar
I0223 00:39:05.608731  370951 build_images.go:123] succeeded building to: functional-511250
I0223 00:39:05.608738  370951 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)

TestFunctional/parallel/ImageCommands/Setup (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-511250
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.91s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-511250 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-511250 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-v4spb" [1d301910-8f50-4ef4-bbd0-ec0ad0834c46] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-v4spb" [1d301910-8f50-4ef4-bbd0-ec0ad0834c46] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005124256s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image load --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 image load --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr: (3.524156067s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image load --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 image load --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr: (2.475037177s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-511250
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image load --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 image load --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr: (4.604120865s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.83s)

TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 service list -o json
functional_test.go:1490: Took "444.948597ms" to run "out/minikube-linux-amd64 -p functional-511250 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "345.801485ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "82.480667ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32016
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "384.451407ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "72.07445ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32016
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/MountCmd/any-port (13.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdany-port2717164134/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1708648726248877841" to /tmp/TestFunctionalparallelMountCmdany-port2717164134/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1708648726248877841" to /tmp/TestFunctionalparallelMountCmdany-port2717164134/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1708648726248877841" to /tmp/TestFunctionalparallelMountCmdany-port2717164134/001/test-1708648726248877841
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.564562ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 00:38 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 00:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 00:38 test-1708648726248877841
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh cat /mount-9p/test-1708648726248877841
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-511250 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [95410d40-10f9-4a07-9ee7-54f46c1ed29d] Pending
helpers_test.go:344: "busybox-mount" [95410d40-10f9-4a07-9ee7-54f46c1ed29d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [95410d40-10f9-4a07-9ee7-54f46c1ed29d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [95410d40-10f9-4a07-9ee7-54f46c1ed29d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003641028s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-511250 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdany-port2717164134/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image save gcr.io/google-containers/addon-resizer:functional-511250 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image rm gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.638612719s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.87s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-511250
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 image save --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-511250 image save --daemon gcr.io/google-containers/addon-resizer:functional-511250 --alsologtostderr: (2.345829421s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-511250
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/DockerEnv/bash (0.95s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-511250 docker-env) && out/minikube-linux-amd64 status -p functional-511250"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-511250 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.95s)

TestFunctional/parallel/MountCmd/specific-port (2.29s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdspecific-port4021348015/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.012391ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdspecific-port4021348015/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511250 ssh "sudo umount -f /mount-9p": exit status 1 (285.623958ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-511250 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdspecific-port4021348015/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-511250 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-511250 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-511250 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-511250 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 369299: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-511250 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-511250 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0fa92aa8-d01b-41a9-87c9-70b9e52194a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0fa92aa8-d01b-41a9-87c9-70b9e52194a6] Running
E0223 00:39:15.087555  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.093412  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.104033  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.124295  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.164612  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.244925  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.405681  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:15.726296  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:16.366939  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:17.647258  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.004078233s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3777635222/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3777635222/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3777635222/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511250 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-511250 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3777635222/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3777635222/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3777635222/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-511250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.90.192 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-511250 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-511250
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-511250
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-511250
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (21.97s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-550912 --driver=docker  --container-runtime=docker
E0223 00:39:25.328548  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:39:35.569120  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-550912 --driver=docker  --container-runtime=docker: (21.973433689s)
--- PASS: TestImageBuild/serial/Setup (21.97s)

TestImageBuild/serial/NormalBuild (1.1s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-550912
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-550912: (1.099661859s)
--- PASS: TestImageBuild/serial/NormalBuild (1.10s)

TestImageBuild/serial/BuildWithBuildArg (0.75s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-550912
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.75s)

TestImageBuild/serial/BuildWithDockerIgnore (0.51s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-550912
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.51s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-550912
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)

TestJSONOutput/start/Command (39.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-918590 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-918590 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (39.623507408s)
--- PASS: TestJSONOutput/start/Command (39.62s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-918590 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-918590 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.68s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-918590 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-918590 --output=json --user=testUser: (5.678325381s)
--- PASS: TestJSONOutput/stop/Command (5.68s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-839359 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-839359 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.136547ms)

-- stdout --
	{"specversion":"1.0","id":"d94537bd-505b-45c7-94cc-a99dbe8ed4c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-839359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82016ead-3af2-42a5-854c-f62f5a49b0d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18233"}}
	{"specversion":"1.0","id":"64f3ca1d-0f02-44d5-a4ed-185253a5a52f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0434dc87-803a-4cac-8c53-15ef342affc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig"}}
	{"specversion":"1.0","id":"47a3ea26-034e-4268-b508-eb0e675f3c22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube"}}
	{"specversion":"1.0","id":"100da7a6-a918-4f33-90b7-a753447d2cac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"743fbe04-8e42-42e5-8aba-938b3e5926e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a65d0ba3-0e35-42cc-992b-cdb546fa9094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-839359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-839359
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (28.19s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-047011 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-047011 --network=: (26.089253117s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-047011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-047011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-047011: (2.080828369s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.19s)

TestKicCustomNetwork/use_default_bridge_network (24.27s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-447040 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-447040 --network=bridge: (22.329754052s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-447040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-447040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-447040: (1.923817796s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.27s)

TestKicExistingNetwork (24.14s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-612625 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-612625 --network=existing-network: (22.16284683s)
helpers_test.go:175: Cleaning up "existing-network-612625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-612625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-612625: (1.848074206s)
--- PASS: TestKicExistingNetwork (24.14s)

TestKicCustomSubnet (23.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-968176 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-968176 --subnet=192.168.60.0/24: (21.12735116s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-968176 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-968176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-968176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-968176: (2.074649045s)
--- PASS: TestKicCustomSubnet (23.22s)

TestKicStaticIP (26.71s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-163870 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-163870 --static-ip=192.168.200.200: (24.585497873s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-163870 ip
helpers_test.go:175: Cleaning up "static-ip-163870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-163870
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-163870: (1.990385532s)
--- PASS: TestKicStaticIP (26.71s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.62s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-147418 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-147418 --driver=docker  --container-runtime=docker: (21.862730004s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-150743 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-150743 --driver=docker  --container-runtime=docker: (25.604546389s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-147418
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-150743
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-150743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-150743
E0223 00:53:35.080279  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-150743: (2.036446502s)
helpers_test.go:175: Cleaning up "first-147418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-147418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-147418: (2.081847644s)
--- PASS: TestMinikubeProfile (52.62s)

TestMountStart/serial/StartWithMountFirst (6.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-143653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-143653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.155131347s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.16s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-143653 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (8.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-158207 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-158207 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.917830508s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.92s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158207 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-143653 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-143653 --alsologtostderr -v=5: (1.445061778s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158207 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-158207
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-158207: (1.177901829s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.42s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-158207
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-158207: (6.421687169s)
--- PASS: TestMountStart/serial/RestartStopped (7.42s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158207 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (70.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0223 00:54:15.087377  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m9.794023408s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.26s)

TestMultiNode/serial/DeployApp2Nodes (34.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-030690 -- rollout status deployment/busybox: (1.95480236s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0223 00:55:38.134421  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-2b9xt -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-swbkq -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-2b9xt -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-swbkq -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-2b9xt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-swbkq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (34.83s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-2b9xt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-2b9xt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-swbkq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030690 -- exec busybox-5b5d89c9d6-swbkq -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (18.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-030690 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-030690 -v 3 --alsologtostderr: (17.566138366s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.18s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-030690 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (9.29s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp testdata/cp-test.txt multinode-030690:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3607714745/001/cp-test_multinode-030690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690:/home/docker/cp-test.txt multinode-030690-m02:/home/docker/cp-test_multinode-030690_multinode-030690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m02 "sudo cat /home/docker/cp-test_multinode-030690_multinode-030690-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690:/home/docker/cp-test.txt multinode-030690-m03:/home/docker/cp-test_multinode-030690_multinode-030690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m03 "sudo cat /home/docker/cp-test_multinode-030690_multinode-030690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp testdata/cp-test.txt multinode-030690-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3607714745/001/cp-test_multinode-030690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690-m02:/home/docker/cp-test.txt multinode-030690:/home/docker/cp-test_multinode-030690-m02_multinode-030690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690 "sudo cat /home/docker/cp-test_multinode-030690-m02_multinode-030690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690-m02:/home/docker/cp-test.txt multinode-030690-m03:/home/docker/cp-test_multinode-030690-m02_multinode-030690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m03 "sudo cat /home/docker/cp-test_multinode-030690-m02_multinode-030690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp testdata/cp-test.txt multinode-030690-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3607714745/001/cp-test_multinode-030690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690-m03:/home/docker/cp-test.txt multinode-030690:/home/docker/cp-test_multinode-030690-m03_multinode-030690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690 "sudo cat /home/docker/cp-test_multinode-030690-m03_multinode-030690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 cp multinode-030690-m03:/home/docker/cp-test.txt multinode-030690-m02:/home/docker/cp-test_multinode-030690-m03_multinode-030690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 ssh -n multinode-030690-m02 "sudo cat /home/docker/cp-test_multinode-030690-m03_multinode-030690-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.29s)

TestMultiNode/serial/StopNode (2.12s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-030690 node stop m03: (1.184173708s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030690 status: exit status 7 (461.981946ms)

-- stdout --
	multinode-030690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-030690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-030690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr: exit status 7 (468.46671ms)

-- stdout --
	multinode-030690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-030690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-030690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 00:56:21.519778  450517 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:56:21.520430  450517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:56:21.520450  450517 out.go:304] Setting ErrFile to fd 2...
	I0223 00:56:21.520459  450517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:56:21.520994  450517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:56:21.521523  450517 out.go:298] Setting JSON to false
	I0223 00:56:21.521562  450517 mustload.go:65] Loading cluster: multinode-030690
	I0223 00:56:21.521678  450517 notify.go:220] Checking for updates...
	I0223 00:56:21.522029  450517 config.go:182] Loaded profile config "multinode-030690": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 00:56:21.522061  450517 status.go:255] checking status of multinode-030690 ...
	I0223 00:56:21.522510  450517 cli_runner.go:164] Run: docker container inspect multinode-030690 --format={{.State.Status}}
	I0223 00:56:21.542645  450517 status.go:330] multinode-030690 host status = "Running" (err=<nil>)
	I0223 00:56:21.542686  450517 host.go:66] Checking if "multinode-030690" exists ...
	I0223 00:56:21.543063  450517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-030690
	I0223 00:56:21.559562  450517 host.go:66] Checking if "multinode-030690" exists ...
	I0223 00:56:21.559806  450517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 00:56:21.559872  450517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-030690
	I0223 00:56:21.576689  450517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/multinode-030690/id_rsa Username:docker}
	I0223 00:56:21.671056  450517 ssh_runner.go:195] Run: systemctl --version
	I0223 00:56:21.674955  450517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 00:56:21.684724  450517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 00:56:21.735199  450517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-23 00:56:21.725277862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 00:56:21.735751  450517 kubeconfig.go:92] found "multinode-030690" server: "https://192.168.58.2:8443"
	I0223 00:56:21.735775  450517 api_server.go:166] Checking apiserver status ...
	I0223 00:56:21.735806  450517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 00:56:21.746157  450517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2383/cgroup
	I0223 00:56:21.754907  450517 api_server.go:182] apiserver freezer: "10:freezer:/docker/4e348aae8b12b75b1e3f73023e253453d52a63035918a6ea2d4bc722dd6a66e8/kubepods/burstable/pod873a274e022e19d878a9c420425469a5/e3ef9544c10e59dfee3ba9822ca056dc4391c9732f3872db452108e5c45f8948"
	I0223 00:56:21.754967  450517 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4e348aae8b12b75b1e3f73023e253453d52a63035918a6ea2d4bc722dd6a66e8/kubepods/burstable/pod873a274e022e19d878a9c420425469a5/e3ef9544c10e59dfee3ba9822ca056dc4391c9732f3872db452108e5c45f8948/freezer.state
	I0223 00:56:21.762449  450517 api_server.go:204] freezer state: "THAWED"
	I0223 00:56:21.762484  450517 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0223 00:56:21.766392  450517 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0223 00:56:21.766414  450517 status.go:421] multinode-030690 apiserver status = Running (err=<nil>)
	I0223 00:56:21.766425  450517 status.go:257] multinode-030690 status: &{Name:multinode-030690 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 00:56:21.766443  450517 status.go:255] checking status of multinode-030690-m02 ...
	I0223 00:56:21.766676  450517 cli_runner.go:164] Run: docker container inspect multinode-030690-m02 --format={{.State.Status}}
	I0223 00:56:21.783700  450517 status.go:330] multinode-030690-m02 host status = "Running" (err=<nil>)
	I0223 00:56:21.783725  450517 host.go:66] Checking if "multinode-030690-m02" exists ...
	I0223 00:56:21.783954  450517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-030690-m02
	I0223 00:56:21.799105  450517 host.go:66] Checking if "multinode-030690-m02" exists ...
	I0223 00:56:21.799405  450517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 00:56:21.799460  450517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-030690-m02
	I0223 00:56:21.814790  450517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/multinode-030690-m02/id_rsa Username:docker}
	I0223 00:56:21.902901  450517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 00:56:21.912939  450517 status.go:257] multinode-030690-m02 status: &{Name:multinode-030690-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 00:56:21.912975  450517 status.go:255] checking status of multinode-030690-m03 ...
	I0223 00:56:21.913271  450517 cli_runner.go:164] Run: docker container inspect multinode-030690-m03 --format={{.State.Status}}
	I0223 00:56:21.929738  450517 status.go:330] multinode-030690-m03 host status = "Stopped" (err=<nil>)
	I0223 00:56:21.929765  450517 status.go:343] host is not running, skipping remaining checks
	I0223 00:56:21.929779  450517 status.go:257] multinode-030690-m03 status: &{Name:multinode-030690-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)

TestMultiNode/serial/StartAfterStop (11.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-030690 node start m03 --alsologtostderr: (10.82202011s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.51s)

TestMultiNode/serial/RestartKeepsNodes (91.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030690
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-030690
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-030690: (22.336461555s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030690 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030690 --wait=true -v=8 --alsologtostderr: (1m8.728843396s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030690
--- PASS: TestMultiNode/serial/RestartKeepsNodes (91.19s)

TestMultiNode/serial/DeleteNode (4.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-030690 node delete m03: (4.065240311s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.64s)

TestMultiNode/serial/StopMultiNode (21.38s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-030690 stop: (21.194687049s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030690 status: exit status 7 (93.731392ms)

-- stdout --
	multinode-030690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-030690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr: exit status 7 (92.25952ms)

-- stdout --
	multinode-030690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-030690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 00:58:30.614014  467061 out.go:291] Setting OutFile to fd 1 ...
	I0223 00:58:30.614182  467061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:58:30.614193  467061 out.go:304] Setting ErrFile to fd 2...
	I0223 00:58:30.614199  467061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0223 00:58:30.614416  467061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
	I0223 00:58:30.614629  467061 out.go:298] Setting JSON to false
	I0223 00:58:30.614671  467061 mustload.go:65] Loading cluster: multinode-030690
	I0223 00:58:30.614722  467061 notify.go:220] Checking for updates...
	I0223 00:58:30.615104  467061 config.go:182] Loaded profile config "multinode-030690": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0223 00:58:30.615123  467061 status.go:255] checking status of multinode-030690 ...
	I0223 00:58:30.615560  467061 cli_runner.go:164] Run: docker container inspect multinode-030690 --format={{.State.Status}}
	I0223 00:58:30.632222  467061 status.go:330] multinode-030690 host status = "Stopped" (err=<nil>)
	I0223 00:58:30.632266  467061 status.go:343] host is not running, skipping remaining checks
	I0223 00:58:30.632280  467061 status.go:257] multinode-030690 status: &{Name:multinode-030690 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 00:58:30.632320  467061 status.go:255] checking status of multinode-030690-m02 ...
	I0223 00:58:30.632589  467061 cli_runner.go:164] Run: docker container inspect multinode-030690-m02 --format={{.State.Status}}
	I0223 00:58:30.649761  467061 status.go:330] multinode-030690-m02 host status = "Stopped" (err=<nil>)
	I0223 00:58:30.649786  467061 status.go:343] host is not running, skipping remaining checks
	I0223 00:58:30.649792  467061 status.go:257] multinode-030690-m02 status: &{Name:multinode-030690-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.38s)

TestMultiNode/serial/RestartMultiNode (59.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0223 00:58:35.080014  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:59:15.086735  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.527918509s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030690 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.13s)

TestMultiNode/serial/ValidateNameConflict (26.93s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030690
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030690-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-030690-m02 --driver=docker  --container-runtime=docker: exit status 14 (76.937107ms)

-- stdout --
	* [multinode-030690-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-030690-m02' is duplicated with machine name 'multinode-030690-m02' in profile 'multinode-030690'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030690-m03 --driver=docker  --container-runtime=docker
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030690-m03 --driver=docker  --container-runtime=docker: (24.451066198s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-030690
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-030690: exit status 80 (278.612403ms)

-- stdout --
	* Adding node m03 to cluster multinode-030690
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-030690-m03 already exists in multinode-030690-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-030690-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-030690-m03: (2.061919883s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.93s)

TestPreload (147.53s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-753305 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-753305 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m31.252810353s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753305 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-753305
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-753305: (10.607365506s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-753305 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-753305 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (42.736432522s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753305 image list
helpers_test.go:175: Cleaning up "test-preload-753305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-753305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-753305: (2.103096866s)
--- PASS: TestPreload (147.53s)

TestScheduledStopUnix (98.23s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-627505 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-627505 --memory=2048 --driver=docker  --container-runtime=docker: (25.146185716s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627505 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-627505 -n scheduled-stop-627505
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627505 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627505 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-627505 -n scheduled-stop-627505
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-627505
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627505 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0223 01:03:35.080460  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-627505
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-627505: exit status 7 (80.071667ms)

-- stdout --
	scheduled-stop-627505
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-627505 -n scheduled-stop-627505
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-627505 -n scheduled-stop-627505: exit status 7 (78.989619ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-627505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-627505
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-627505: (1.633251074s)
--- PASS: TestScheduledStopUnix (98.23s)

TestSkaffold (115.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe560868656 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-385162 --memory=2600 --driver=docker  --container-runtime=docker
E0223 01:04:15.087383  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-385162 --memory=2600 --driver=docker  --container-runtime=docker: (25.187969799s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe560868656 run --minikube-profile skaffold-385162 --kube-context skaffold-385162 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe560868656 run --minikube-profile skaffold-385162 --kube-context skaffold-385162 --status-check=true --port-forward=false --interactive=false: (1m15.306034101s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-75bcfb5754-vspxw" [47afbd14-4d47-4067-b134-0f6176a4092b] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003508303s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5d79f479d9-ln8vd" [4e167837-dcef-4d85-b2e1-fa0960f40c2b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003789181s
helpers_test.go:175: Cleaning up "skaffold-385162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-385162
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-385162: (2.791455083s)
--- PASS: TestSkaffold (115.19s)

TestInsufficientStorage (13.15s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-691498 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-691498 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.97796605s)

-- stdout --
	{"specversion":"1.0","id":"ffe463ba-abae-4423-904f-f290eea3c7ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-691498] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c82b5d79-2d44-4206-a3a1-4adbcd67cdf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18233"}}
	{"specversion":"1.0","id":"d5fc34c6-b941-4b5a-8118-eab9695d2d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f440ed1e-773e-4ec1-ab3e-3763a68e3e0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig"}}
	{"specversion":"1.0","id":"f919641a-b982-49b4-9c8d-ef45043225ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube"}}
	{"specversion":"1.0","id":"af365d51-d9ad-476a-9483-12a22c43fb16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c1f8ca2a-8631-464f-898e-05f7adef9b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f46894b-cbd9-4e05-81fe-11841401856c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fe23f2b1-918f-427e-a152-b841ce84c65d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ecdfb935-9b8d-4675-9687-5c8052ca498f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a611d4da-bf93-45fb-8ed1-ef1cdc6ccc29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"db1f3ae5-1852-4a6a-b36a-4251945fe34a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-691498 in cluster insufficient-storage-691498","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"817291bc-b104-46e6-a978-626af1372085","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"26f9ef83-253d-4009-9fca-992b676e0c0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"48c460ad-e98a-4a50-9c78-a19b2a37b91e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-691498 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-691498 --output=json --layout=cluster: exit status 7 (266.97261ms)

-- stdout --
	{"Name":"insufficient-storage-691498","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-691498","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 01:06:12.707206  507802 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-691498" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-691498 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-691498 --output=json --layout=cluster: exit status 7 (273.856357ms)

-- stdout --
	{"Name":"insufficient-storage-691498","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-691498","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 01:06:12.980771  507892 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-691498" does not appear in /home/jenkins/minikube-integration/18233-317564/kubeconfig
	E0223 01:06:12.990809  507892 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/insufficient-storage-691498/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-691498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-691498
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-691498: (1.634025208s)
--- PASS: TestInsufficientStorage (13.15s)

TestRunningBinaryUpgrade (64.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3258722243 start -p running-upgrade-450014 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3258722243 start -p running-upgrade-450014 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.99895378s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-450014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-450014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.914113086s)
helpers_test.go:175: Cleaning up "running-upgrade-450014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-450014
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-450014: (2.117077168s)
--- PASS: TestRunningBinaryUpgrade (64.51s)

TestMissingContainerUpgrade (137.14s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.20507740 start -p missing-upgrade-619261 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.20507740 start -p missing-upgrade-619261 --memory=2200 --driver=docker  --container-runtime=docker: (1m9.796266493s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-619261
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-619261: (10.371394222s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-619261
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-619261 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-619261 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.30045792s)
helpers_test.go:175: Cleaning up "missing-upgrade-619261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-619261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-619261: (2.07074805s)
--- PASS: TestMissingContainerUpgrade (137.14s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598969 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-598969 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (81.234021ms)

-- stdout --
	* [NoKubernetes-598969] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (39.1s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598969 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598969 --driver=docker  --container-runtime=docker: (38.796169451s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-598969 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.10s)

TestStoppedBinaryUpgrade/Upgrade (99.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1790612533 start -p stopped-upgrade-607441 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1790612533 start -p stopped-upgrade-607441 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m3.622173937s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1790612533 -p stopped-upgrade-607441 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1790612533 -p stopped-upgrade-607441 stop: (10.800110107s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-607441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-607441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.725914619s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.15s)

TestNoKubernetes/serial/StartWithStopK8s (17.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598969 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598969 --no-kubernetes --driver=docker  --container-runtime=docker: (15.513119602s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-598969 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-598969 status -o json: exit status 2 (393.135967ms)

-- stdout --
	{"Name":"NoKubernetes-598969","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-598969
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-598969: (1.846606965s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.75s)

TestNoKubernetes/serial/Start (8.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598969 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598969 --no-kubernetes --driver=docker  --container-runtime=docker: (8.756591985s)
--- PASS: TestNoKubernetes/serial/Start (8.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-598969 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-598969 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.354027ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (7.65s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.497915241s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (4.155828847s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.65s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-598969
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-598969: (1.220995937s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.03s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598969 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598969 --driver=docker  --container-runtime=docker: (7.027693535s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-598969 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-598969 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.190905ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-607441
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-607441: (1.425515956s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

TestPause/serial/Start (75.58s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-976325 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0223 01:09:15.086784  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-976325 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m15.57857319s)
--- PASS: TestPause/serial/Start (75.58s)

TestNetworkPlugins/group/auto/Start (65.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m5.938911534s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.94s)

TestPause/serial/SecondStartNoReconfiguration (38.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-976325 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0223 01:10:47.666454  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:47.671713  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:47.681990  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:47.702324  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:47.742631  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:47.822955  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:47.983215  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:48.303495  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:48.943848  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:50.224851  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:10:52.785237  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-976325 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.986136407s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.01s)

TestPause/serial/Pause (0.48s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-976325 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-976325 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-976325 --output=json --layout=cluster: exit status 2 (296.146505ms)

-- stdout --
	{"Name":"pause-976325","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-976325","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.43s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-976325 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

TestPause/serial/PauseAgain (0.67s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-976325 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.67s)

TestPause/serial/DeletePaused (2.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-976325 --alsologtostderr -v=5
E0223 01:10:57.905526  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-976325 --alsologtostderr -v=5: (2.059010453s)
--- PASS: TestPause/serial/DeletePaused (2.06s)

TestPause/serial/VerifyDeletedResources (0.62s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-976325
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-976325: exit status 1 (15.136917ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-976325: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.467831635s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.47s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zv95d" [16918936-4d8b-4136-9029-e8c174d2c8d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 01:11:08.145906  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zv95d" [16918936-4d8b-4136-9029-e8c174d2c8d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003794884s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.48s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.888596862s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.89s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f8rk6" [19feb4bc-3843-46c8-a4d2-87574726470a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004114428s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (53.407643491s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2vj96" [626737a9-706b-4aa7-9065-e3116a9bbf82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2vj96" [626737a9-706b-4aa7-9065-e3116a9bbf82] Running
E0223 01:12:09.586980  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004548454s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.54s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (40.037652514s)
--- PASS: TestNetworkPlugins/group/false/Start (40.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p9b46" [1d28456f-d5f8-432a-9493-a922e6774f86] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004508059s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dx84d" [8baf3e79-4c48-410b-af5a-a6dda37e42aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dx84d" [8baf3e79-4c48-410b-af5a-a6dda37e42aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003828675s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8nj7n" [9190b6a5-b527-4968-8871-a2f3408052a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8nj7n" [9190b6a5-b527-4968-8871-a2f3408052a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004062525s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jp87n" [26dbc18b-2753-4f45-80b3-ae0a8f84a5f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jp87n" [26dbc18b-2753-4f45-80b3-ae0a8f84a5f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004041787s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m19.266147516s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.27s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (58.21444293s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.21s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0223 01:14:15.086848  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (43.396786615s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.40s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4nxwq" [1698673d-d6ab-4ce4-9aa2-673d12a03d3d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004348946s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p5h9p" [1779164f-46cc-4b47-afea-c5b3df1d890b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p5h9p" [1779164f-46cc-4b47-afea-c5b3df1d890b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004809631s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8k856" [d253a10e-9df5-45d6-91ce-e8ea0408a79c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8k856" [d253a10e-9df5-45d6-91ce-e8ea0408a79c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003759905s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.20s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w5pwj" [7c3a0d47-0703-45cd-886b-855076ddef5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w5pwj" [7c3a0d47-0703-45cd-886b-855076ddef5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003886639s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (49.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-600346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (49.320944086s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-157588 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-157588 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (51.551241005s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.55s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-600346 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-600346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-22rsj" [b366c872-dbc7-4a06-ae11-69cf7b2eecab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 01:15:47.666658  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-22rsj" [b366c872-dbc7-4a06-ae11-69cf7b2eecab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004289223s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-600346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-600346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E0223 01:23:18.155704  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-157588 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e4b43f39-0e4a-4272-b244-d1359f255587] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e4b43f39-0e4a-4272-b244-d1359f255587] Running
E0223 01:16:05.270975  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.276268  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.286748  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.307746  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.348049  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.428412  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.588702  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:05.909560  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:06.549739  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:07.830154  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004056108s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-157588 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-157588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0223 01:16:10.391253  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-157588 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-157588 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-157588 --alsologtostderr -v=3: (10.831793085s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-039066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0223 01:16:15.348713  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:16:15.512028  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-039066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (40.620796542s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-157588 -n no-preload-157588
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-157588 -n no-preload-157588: exit status 7 (137.493629ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-157588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (315.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-157588 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0223 01:16:25.752540  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:16:38.128610  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 01:16:46.233175  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-157588 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (5m15.288117886s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-157588 -n no-preload-157588
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (315.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-039066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d889dea3-d481-42bb-8854-bc44049f4f9d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0223 01:16:55.031096  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.036362  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.046595  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.066874  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.107364  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.188488  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.348803  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:55.669341  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:16:56.310001  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d889dea3-d481-42bb-8854-bc44049f4f9d] Running
E0223 01:16:57.590184  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:17:00.150798  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004061243s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-039066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-039066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-039066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-039066 --alsologtostderr -v=3
E0223 01:17:05.271960  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-039066 --alsologtostderr -v=3: (10.660746873s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-039066 -n embed-certs-039066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-039066 -n embed-certs-039066: exit status 7 (79.542301ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-039066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (563.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-039066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0223 01:17:15.512200  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:17:27.194346  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:17:35.992590  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:17:43.125710  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.131031  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.142008  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.162361  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.202630  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.282956  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.443418  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:43.764037  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:44.404308  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:45.685069  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:48.246023  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:50.470911  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:50.476167  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:50.486428  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:50.506691  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:50.547015  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:50.627404  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:50.787821  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:51.108419  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:51.749589  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:53.030394  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:17:53.367021  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:17:55.590909  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:18:00.711263  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:18:03.608176  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:18:10.952005  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:18:13.560512  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:13.565811  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:13.576089  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:13.596375  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:13.636636  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:13.716942  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:13.877293  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:14.197866  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:14.838953  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:16.120171  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:16.952817  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:18:18.680841  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:23.801103  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:24.088981  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:18:31.432642  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:18:34.041620  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:18:35.079895  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 01:18:49.115162  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
E0223 01:18:54.522373  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:19:05.050017  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:19:12.393425  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:19:15.087006  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 01:19:21.124602  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.129883  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.140642  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.160917  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.201243  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.281570  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.441971  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:21.762939  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:22.403381  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:23.683861  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:26.244776  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:26.321985  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.327267  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.337545  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.357802  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.398085  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.478417  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.638807  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:26.959273  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:27.600202  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:28.880595  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:31.365518  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:31.440735  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:35.482652  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
E0223 01:19:36.561846  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:38.873037  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:19:40.456351  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:40.461594  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:40.471894  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:40.492274  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:40.532595  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:40.612899  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:40.773297  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:41.093920  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:41.606707  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:19:41.734910  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:43.015283  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:45.576102  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:19:46.802809  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:19:50.696453  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:20:00.936716  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:20:02.087643  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:20:07.283840  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
E0223 01:20:21.417546  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
E0223 01:20:26.970727  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
E0223 01:20:34.314551  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:20:43.048089  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
E0223 01:20:46.241578  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.246834  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.257022  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.277287  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.317574  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.398099  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.558552  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:46.878682  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:47.519740  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:20:47.665772  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/skaffold-385162/client.crt: no such file or directory
E0223 01:20:48.245030  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-039066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (9m23.136245735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-039066 -n embed-certs-039066
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (563.44s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.00s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-643873 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0223 01:21:27.202503  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:21:32.956119  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/auto-600346/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-643873 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (38.002772604s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6z977" [d6b55095-2795-49f8-bce9-99a40fceceee] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6z977" [d6b55095-2795-49f8-bce9-99a40fceceee] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003723735s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6z977" [d6b55095-2795-49f8-bce9-99a40fceceee] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004288404s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-157588 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-157588 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-157588 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-157588 -n no-preload-157588
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-157588 -n no-preload-157588: exit status 2 (323.538412ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-157588 -n no-preload-157588
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-157588 -n no-preload-157588: exit status 2 (342.809526ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-157588 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-157588 -n no-preload-157588
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-157588 -n no-preload-157588
E0223 01:21:55.031504  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (36.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-538058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-538058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (36.68959567s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.69s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-643873 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [73345765-4b8c-42ab-85b2-70ce43cddc6f] Pending
helpers_test.go:344: "busybox" [73345765-4b8c-42ab-85b2-70ce43cddc6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [73345765-4b8c-42ab-85b2-70ce43cddc6f] Running
E0223 01:22:04.968372  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/flannel-600346/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004396085s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-643873 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-643873 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-643873 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-643873 --alsologtostderr -v=3
E0223 01:22:08.163269  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kubenet-600346/client.crt: no such file or directory
E0223 01:22:10.165400  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/bridge-600346/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-643873 --alsologtostderr -v=3: (10.843318973s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873: exit status 7 (122.73839ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-643873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-643873 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0223 01:22:22.713757  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/kindnet-600346/client.crt: no such file or directory
E0223 01:22:24.298643  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/enable-default-cni-600346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-643873 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (9m19.446207534s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.76s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-538058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/newest-cni/serial/Stop (10.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-538058 --alsologtostderr -v=3
E0223 01:22:43.126618  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-538058 --alsologtostderr -v=3: (10.774688617s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.77s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-538058 -n newest-cni-538058
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-538058 -n newest-cni-538058: exit status 7 (144.622446ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-538058 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (26.17s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-538058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0223 01:22:50.470826  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/custom-flannel-600346/client.crt: no such file or directory
E0223 01:23:10.811567  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/calico-600346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-538058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (25.848264176s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-538058 -n newest-cni-538058
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.17s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-538058 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-538058 --alsologtostderr -v=1
E0223 01:23:13.560078  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/false-600346/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-538058 -n newest-cni-538058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-538058 -n newest-cni-538058: exit status 2 (297.085256ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-538058 -n newest-cni-538058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-538058 -n newest-cni-538058: exit status 2 (299.45232ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-538058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-538058 -n newest-cni-538058
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-538058 -n newest-cni-538058
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.44s)

TestStartStop/group/old-k8s-version/serial/Stop (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-799707 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-799707 --alsologtostderr -v=3: (1.198792307s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799707 -n old-k8s-version-799707: exit status 7 (76.636404ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-799707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nvxrt" [78f76cae-b836-428a-b11f-7e7d76106107] Running
E0223 01:26:42.476754  324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/no-preload-157588/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00443656s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nvxrt" [78f76cae-b836-428a-b11f-7e7d76106107] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003667381s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-039066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-039066 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-039066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-039066 -n embed-certs-039066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-039066 -n embed-certs-039066: exit status 2 (297.282034ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-039066 -n embed-certs-039066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-039066 -n embed-certs-039066: exit status 2 (299.357647ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-039066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-039066 -n embed-certs-039066
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-039066 -n embed-certs-039066
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-htdfd" [f99c5ea0-b3dd-42de-a304-d09811656822] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00436631s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-htdfd" [f99c5ea0-b3dd-42de-a304-d09811656822] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004016641s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-643873 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643873 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-643873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873: exit status 2 (294.435612ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873: exit status 2 (297.520137ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-643873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-643873 -n default-k8s-diff-port-643873
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.40s)

Test skip (23/330)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-600346 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-600346

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-600346

>>> host: /etc/nsswitch.conf:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/hosts:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/resolv.conf:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-600346

>>> host: crictl pods:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: crictl containers:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> k8s: describe netcat deployment:
error: context "cilium-600346" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-600346" does not exist

>>> k8s: netcat logs:
error: context "cilium-600346" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-600346" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-600346" does not exist

>>> k8s: coredns logs:
error: context "cilium-600346" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-600346" does not exist

>>> k8s: api server logs:
error: context "cilium-600346" does not exist

>>> host: /etc/cni:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: ip a s:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: ip r s:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: iptables-save:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: iptables table nat:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-600346

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-600346

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-600346" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-600346" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-600346

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-600346

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-600346" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-600346" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-600346" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-600346" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-600346" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: kubelet daemon config:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> k8s: kubelet logs:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 23 Feb 2024 01:07:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-619261
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 23 Feb 2024 01:07:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-607441
contexts:
- context:
    cluster: missing-upgrade-619261
    extensions:
    - extension:
        last-update: Fri, 23 Feb 2024 01:07:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-619261
  name: missing-upgrade-619261
- context:
    cluster: stopped-upgrade-607441
    user: stopped-upgrade-607441
  name: stopped-upgrade-607441
current-context: stopped-upgrade-607441
kind: Config
preferences: {}
users:
- name: missing-upgrade-619261
  user:
    client-certificate: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/missing-upgrade-619261/client.crt
    client-key: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/missing-upgrade-619261/client.key
- name: stopped-upgrade-607441
  user:
    client-certificate: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/stopped-upgrade-607441/client.crt
    client-key: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/stopped-upgrade-607441/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-600346

>>> host: docker daemon status:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: docker daemon config:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: docker system info:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: cri-docker daemon status:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: cri-docker daemon config:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: cri-dockerd version:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: containerd daemon status:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: containerd daemon config:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: containerd config dump:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: crio daemon status:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: crio daemon config:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: /etc/crio:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

>>> host: crio config:
* Profile "cilium-600346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600346"

----------------------- debugLogs end: cilium-600346 [took: 4.239568278s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-600346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-600346
--- SKIP: TestNetworkPlugins/group/cilium (4.45s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-728912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-728912
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
